Phil 4.25.18

7:00 – 3:30 ASRC MKT

  • Google’s Workshop on AI/ML Research and Practice in India:
    Ganesh Ramakrishnan (IIT Bombay) presented research on human-assisted machine learning.
  • From I to We: Group Formation and Linguistic Adaption in an Online Xenophobic Forum
    • Much of identity formation processes nowadays takes place online, indicating that intergroup differentiation may be found in online communities. This paper focuses on identity formation processes in an open online xenophobic, anti-immigrant, discussion forum. Open discussion forums provide an excellent opportunity to investigate open interactions that may reveal how identity is formed and how individual users are influenced by other users. Using computational text analysis and Linguistic Inquiry Word Count (LIWC), our results show that new users change from an individual identification to a group identification over time as indicated by a decrease in the use of “I” and increase in the use of “we”. The analyses also show increased use of “they” indicating intergroup differentiation. Moreover, the linguistic style of new users became more similar to that of the overall forum over time. Further, the emotional content decreased over time. The results indicate that new users on a forum create a collective identity with the other users and adapt to them linguistically.
    • Social influence is broadly defined as any change – emotional, behavioral, or attitudinal – that has its roots in others’ real or imagined presence (Allport, 1954). (pg 77)
    • Regardless of why an individual displays an observable behavioral change that is in line with group norms, social identification with a group is the basis for the change. (pg 77)
    • In social psychological terms, a group is defined as more than two people that share certain goals (Cartwright & Zander, 1968). (pg 77)
    • Processes of social identification, intergroup differentiation and social influence have to date not been studied in online forums. The aim of the present research is to fill this gap and provide information on how such processes can be studied through language used on the forum. (pg 78)
    • The popularity of social networking sites has increased immensely during the last decade. At the same time, offline socializing has shown a decline (Duggan & Smith, 2013). Now, much of the socializing actually takes place online (Ganda, 2014). In order to be part of an online community, the individual must socialize with other users. Through such socializing, individuals create self-representations (Enli & Thumim, 2012). Hence, the processes of identity formation, may to a large extent take place on the Internet in various online forums. (pg 78)
    • For instance, linguistic analyses of American Nazis have shown that use of third person plural pronouns (they, them, their) is the single best predictor of extreme attitudes (Pennebaker & Chung, 2008). (pg 79)
    • Because language can be seen as behavior (Fiedler, 2008), it may be possible to study processes of social influence through linguistic analysis. Thus, our second hypothesis is that the linguistic style of new users will become increasingly similar to the linguistic style of the overall forum over time (H2). (pg 79)
    • This indicates that the content of the posts in an online forum may also change over time as arguments become more fine-tuned and input from both supporting and contradicting members are integrated into an individual’s own beliefs. This is likely to result (linguistically) in an increase in indicators of cognitive complexity. Hence, we hypothesize that the content of the posts will change over time, such that indicators of complex thinking will increase (H3a). (pg 80)
      • I’m not sure what to think about this. From a dimension-reduction perspective, I’d expect that as the group becomes more aligned, overall complex thinking will decrease and the outliers will leave, at least in the extreme of a stampede condition.
    • This result indicates that after having expressed negativity in the forum, the need for such expressions should decrease. Hence, we expect that the content of the posts will change such that indicators of negative emotions will decrease, over time (H3b). (pg 80)
    • the forum is presented as a “very liberal forum”, where people are able to express their opinions, whatever they may be. This “extreme liberal” idea implies that there is very little censorship, which has resulted in that the forum is highly xenophobic. Nonetheless, due to its liberal self-presentation, the xenophobic discussions are not unchallenged. For example, also anti-racist people join this forum in order to challenge individuals with xenophobic attitudes. This means that the forum is not likely to function as a pure echo chamber, because contradicting arguments must be met with own arguments. Hence, individuals will learn from more experienced users how to counter contradicting arguments in a convincing way. Hence, they are likely to incorporate new knowledge, embrace input and contribute to evolving ideas and arguments. (pg 81)
      • Open debate can lead to the highest level of polarization (M&D)
      • There isn’t diverse opinion. The conversation is polarized, with opponents pushing toward the opposite pole. The question I’d like to see answered is whether extremism has increased in the forum.
    • Natural language analyses of anonymous social media forums also circumvent social desirability biases that may be present in traditional self-rating research, which is a particularly important concern in relation to issues related to outgroups (Maass, Salvi, Arcuri, & Semin, 1989; von Hippel, Sekaquaptewa, & Vargas, 1997, 2008). The to-be analyzed media uses “aliases”, yielding anonymity of the users while at the same time allowing us to track individuals over time and analyze changes in communication patterns. (pg 81)
      • After seeing “Ready Player One”, I also wonder if the aliases themselves could be looked at using an embedding space built from the terms used by the users? Then you get distance measurements, t-sne projections, etc.
    • Linguistic Inquiry Word Count (LIWC; Pennebaker et al., 2007; Chung & Pennebaker, 2007; Pennebaker, 2011b; Pennebaker, Francis, & Booth, 2001) is a computerized text analysis program that computes a LIWC score, i.e., the percentage of various language categories relative to the number of total words (see also www.liwc.net). (pg 81)
      • LIWC2015 ($90) is the gold standard in computerized text analysis. Learn how the words we use in everyday language reveal our thoughts, feelings, personality, and motivations. Based on years of scientific research, LIWC2015 is more accurate, easier to use, and provides a broader range of social and psychological insights compared to earlier LIWC versions
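      • To make concrete what a LIWC score is, here’s a toy sketch of the computation: the percentage of words in a text that fall into a category. The category word lists below are invented stand-ins, not the real (proprietary) LIWC dictionaries.

```typescript
// Toy illustration of a LIWC-style score: percentage of words in a text
// belonging to a given category. Category word lists are made-up stand-ins,
// NOT the actual LIWC lexicon.
const toyCategories: Record<string, Set<string>> = {
  firstPersonSingular: new Set(['i', 'me', 'my', 'mine']),
  firstPersonPlural: new Set(['we', 'us', 'our', 'ours']),
  thirdPersonPlural: new Set(['they', 'them', 'their', 'theirs']),
};

function liwcScore(text: string, category: string): number {
  // Tokenize to lowercase words, then count category hits.
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  if (words.length === 0) return 0;
  const dict = toyCategories[category];
  const hits = words.filter(w => dict.has(w)).length;
  return 100 * hits / words.length; // percent of total words
}
```

        So “we oppose them” scores about 33.3 on both firstPersonPlural and thirdPersonPlural.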
    • Figure 1c shows words overrepresented in later posts, i.e. words where the usage of the words correlates positively with how long the user has been active on the forum. The words here typically lack emotional content and are indicators of higher complexity in language. Again, this analysis provides preliminary support for the idea that time on the forum is related to more complex thinking, and less emotionality.
      • WordCloud
    • The second hypothesis was that the linguistic style of new users would become increasingly similar to other users on the forum over time. This hypothesis is evaluated by first z-transforming each LIWC score, so that each has a mean value of zero and a standard deviation of one. Then we measure how each post differs from the standardized values by summing the absolute z-values over all 62 LIWC categories from 2007. Thus, low values on these deviation scores indicate that posts are more prototypical, or highly similar, to what other users write. These deviation scores are analyzed in the same way as for Hypothesis 1 (i.e., by correlating each user score with the number of days on the forum, and then t-testing whether the correlations are significantly different from zero). In support of the hypothesis, the results show an increase in similarity, as indicated by decreasing deviation scores (Figure 2). The mean correlation coefficient between this measure and time on the forum was -.0086, which is significant, t(11749) = -3.77, p < 0.001. (pg 85)
      • ForumAlignment
      • I think it is reasonable to consider this a measure of alignment
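      • A minimal sketch of that deviation-score computation as I read it (my reconstruction: the paper used all 62 LIWC categories; I assume population z-scores here):

```typescript
// Sketch of the paper's deviation ("alignment") measure: z-transform each
// LIWC category across all posts, then score each post by the sum of
// absolute z-values over the categories. Low scores mean the post is close
// to the forum-wide average style.
function deviationScores(posts: number[][]): number[] {
  const nCats = posts[0].length;
  const means = new Array<number>(nCats).fill(0);
  const sds = new Array<number>(nCats).fill(1);
  for (let c = 0; c < nCats; c++) {
    const col = posts.map(p => p[c]);
    const mean = col.reduce((a, b) => a + b, 0) / col.length;
    const sd = Math.sqrt(
      col.reduce((a, b) => a + (b - mean) ** 2, 0) / col.length);
    means[c] = mean;
    sds[c] = sd || 1; // guard against zero-variance categories
  }
  // Deviation score per post: sum of |z| over all categories.
  return posts.map(p =>
    p.reduce((sum, v, c) => sum + Math.abs((v - means[c]) / sds[c]), 0));
}
```

        The paper then correlates each user’s deviation scores with days on the forum and t-tests the per-user correlations against zero.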
    • Because individuals form identities online and because we see this in the use of pronouns, we also expected to see tendencies of social influence and adaption. This effect was also found, such that individuals’ linguistic style became increasingly similar to other users’ linguistic style over time. Past research has shown that accommodation of communication style occurs automatically when people connect to people or groups they like (Giles & Ogay 2007; Ireland et al., 2011), but also that similarity in communicative style functions as cohesive glue within a group (Reid, Giles, & Harwood, 2005). (pg 86)
    • Still, the results could not confirm an increase in cognitive complexity. It is difficult to determine why this was not observed even though a general trend to conform to the linguistic style on the forum was observed. (pg 87)
      • This is what I would expect. As alignment increases, complexity, as expressed by higher dimensional thinking should decrease.
    • This idea would also be in line with previous research that has shown that expressing oneself decreases arousal (Garcia et al., 2016). Moreover, because the forum is not explicitly racist, individuals may have simply adapted to the social norms on the forum prescribing less negative emotional displays. Finally, a possible explanation for the decrease in negative emotional words might be that users who are very angry leave the forum, because of its non-racist focus, and end up in more hostile forums. An interesting finding that was not part of the hypotheses in the present research is that the third person plural category correlated positively with all four negative emotions categories, suggesting that people using for example ‘they’ express more negative emotions (pg 87)
    • In line with social identity theory (Tajfel & Turner, 1986), we also observe linguistic adaption to the group. Hence, our results indicate that processes of identity formation may take place online. (pg 87)
  • Me, My Echo Chamber, and I: Introspection on Social Media Polarization
    • Homophily — our tendency to surround ourselves with others who share our perspectives and opinions about the world — is both a part of human nature and an organizing principle underpinning many of our digital social networks. However, when it comes to politics or culture, homophily can amplify tribal mindsets and produce “echo chambers” that degrade the quality, safety, and diversity of discourse online. While several studies have empirically proven this point, few have explored how making users aware of the extent and nature of their political echo chambers influences their subsequent beliefs and actions. In this paper, we introduce Social Mirror, a social network visualization tool that enables a sample of Twitter users to explore the politically-active parts of their social network. We use Social Mirror to recruit Twitter users with a prior history of political discourse to a randomized experiment where we evaluate the effects of different treatments on participants’ i) beliefs about their network connections, ii) the political diversity of who they choose to follow, and iii) the political alignment of the URLs they choose to share. While we see no effects on average political alignment of shared URLs, we find that recommending accounts of the opposite political ideology to follow reduces participants’ beliefs in the political homogeneity of their network connections but still enhances their connection diversity one week after treatment. Conversely, participants who enhance their belief in the political homogeneity of their Twitter connections have less diverse network connections 2-3 weeks after treatment. We explore the implications of these disconnects between beliefs and actions on future efforts to promote healthier exchanges in our digital public spheres.
  • What We Read, What We Search: Media Attention and Public Attention Among 193 Countries
    • We investigate the alignment of international attention of news media organizations within 193 countries with the expressed international interests of the public within those same countries from March 7, 2016 to April 14, 2017. We collect fourteen months of longitudinal data of online news from Unfiltered News and web search volume data from Google Trends and build a multiplex network of media attention and public attention in order to study its structural and dynamic properties. Structurally, the media attention and the public attention are both similar and different depending on the resolution of the analysis. For example, we find that 63.2% of the country-specific media and the public pay attention to different countries, but local attention flow patterns, which are measured by network motifs, are very similar. We also show that there are strong regional similarities with both media and public attention that is only disrupted by significantly major worldwide incidents (e.g., Brexit). Using Granger causality, we show that there are a substantial number of countries where media attention and public attention are dissimilar by topical interest. Our findings show that the media and public attention toward specific countries are often at odds, indicating that the public within these countries may be ignoring their country-specific news outlets and seeking other online sources to address their media needs and desires.
  • “You are no Jack Kennedy”: On Media Selection of Highlights from Presidential Debates
    • Our findings indicate that there exist signals in the textual information that untrained humans do not find salient. In particular, highlights are locally distinct from the speaker’s previous turn, but are later echoed more by both the speaker and other participants (Conclusions)
      • This sounds like dimension reduction and alignment
  • Algorithms, bots, and political communication in the US 2016 election – The challenge of automated political communication for election law and administration
    • Philip N. Howard (Scholar)
    • Samuel C. Woolley (Scholar)
    • Ryan Calo (Scholar)
    • Political communication is the process of putting information, technology, and media in the service of power. Increasingly, political actors are automating such processes, through algorithms that obscure motives and authors yet reach immense networks of people through personal ties among friends and family. Not all political algorithms are used for manipulation and social control however. So what are the primary ways in which algorithmic political communication—organized by automated scripts on social media—may undermine elections in democracies? In the US context, what specific elements of communication policy or election law might regulate the behavior of such “bots,” or the political actors who employ them? First, we describe computational propaganda and define political bots as automated scripts designed to manipulate public opinion. Second, we illustrate how political bots have been used to manipulate public opinion and explain how algorithms are an important new domain of analysis for scholars of political communication. Finally, we demonstrate how political bots are likely to interfere with political communication in the United States by allowing surreptitious campaign coordination, illegally soliciting either contributions or votes, or violating rules on disclosure.
  • Ok, back to getting HttpClient POSTs to play with PHP cross-domain
  • Maybe I have to make a proxy?
    • Using the proxying support in webpack’s dev server we can hijack certain URLs and send them to a backend server. We do this by passing a file via --proxy-config
    • Well, that fixes the need to have all the server options set, but the post still doesn’t send data. But since this is the Right way to do things, here are the steps:
    • To proxy localhost:4200/uli -> localhost:80/uli
      • Create a proxy.conf.json file in the same directory as package.json
        {
          "/uli": {
            "target": "http://localhost:80",
            "secure": false
          }
        }

        This will cause any explicit request to localhost:4200/uli to be forwarded to localhost:80/uli, so to the backend the requests appear to come from localhost:80/uli

      • Set the npm start command in the package.json file to read as
        "scripts": {
          "start": "ng serve --proxy-config proxy.conf.json",
          ...
        },

        Start with “npm start”, rather than “ng serve”

      • Call from Angular like this:
        this.http.post('http://localhost:4200/uli/script.php', payload, httpOptions)
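      • Note that httpOptions isn’t defined anywhere in these notes. One guess about why the POST body never shows up: Angular’s HttpClient serializes a plain object as application/json, and PHP only parses application/x-www-form-urlencoded (or multipart) bodies into $_POST, so a JSON body would arrive but $_POST would stay empty. A hedged sketch of a form-encoded alternative (the helper name is mine):

```typescript
// Build an application/x-www-form-urlencoded body by hand, so that a plain
// PHP backend will see the fields in $_POST. (A JSON body would instead
// have to be read on the PHP side via file_get_contents('php://input').)
function toFormBody(obj: Record<string, unknown>): string {
  return Object.entries(obj)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(String(v))}`)
    .join('&');
}
```

        Posting toFormBody(payload) with a 'Content-Type': 'application/x-www-form-urlencoded' header (via HttpHeaders) would be the thing to try.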
      • Here’s the PHP code (script.php): it takes POST and GET input and feeds it back with some information about the source:
        <?php
        function getBrowserInfo(){
             $browserData = array();
             $ip = htmlentities($_SERVER['REMOTE_ADDR']);
             $browser = htmlentities($_SERVER['HTTP_USER_AGENT']);
             $referrer = "No Referrer";
             if(isset($_SERVER['HTTP_REFERER'])) {
                 //do what you need to do here if it's set
                 $referrer = htmlentities($_SERVER['HTTP_REFERER']);
                 if($referrer == ""){
                     $referrer = "No Referrer";
                 }
             }
             $browserData["ipAddress"] = $ip;
             $browserData["browser"] = $browser;
             $browserData["referrer"] = $referrer;
             return $browserData;
         }
         function getPostInfo(){
             $postInfo = array();
             foreach($_POST as $key => $value) {
                 if(strlen($value) < 10000) {
                     $postInfo[$key] = $value;
                 }else{
                     $postInfo[$key] = "string too long";
                 }
             }
             return $postInfo;
         }
         function getGetInfo(){
             $getInfo = array();
             foreach($_GET as $key => $value) {
                if(strlen($value) < 10000) {
                    $getInfo[$key] = $value;
                }else{
                    $getInfo[$key] = "string too long";
                }
            }
            return $getInfo;
        }
        
        /**************************** MAIN ********************/
        $toReturn = array();
        $toReturn['getPostInfo'] = getPostInfo();
        $toReturn['getGetInfo'] = getGetInfo();
        $toReturn['browserInfo'] = getBrowserInfo();
        $toReturn['time'] = date("h:i:sa");
        $jstr =  json_encode($toReturn);
        echo($jstr);
      • And it arrives at localhost:80/uli/script.php. The following is the JavaScript console output of the Angular CLI code running on localhost:4200:
        {getPostInfo: Array(0), getGetInfo: {…}, browserInfo: {…}, time: "05:17:16pm"}
        browserInfo:
        	browser:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
        	ipAddress:"127.0.0.1"
        	referrer:"http://localhost:4200/"
        getGetInfo:
        	message:"{"title":"foo","body":"bar","userId":1}"
        getPostInfo:[]
        time:"05:17:16pm"
        
      • Got the pieces parsing in the @Component and displaying, so the round trip is done. Wasn’t expecting to wind up using GET, but until I can figure out what the deal is with POST, that’s what it’s going to be. Here are the two methods that send and then parse the message:
        doGet(event) {
          let payload = {
            title: 'foo',
            body: 'bar',
            userId: 1
          };
          let message = 'message='+encodeURIComponent(JSON.stringify(payload));
          let target = 'http://localhost:4200/uli/script.php?';
        
          //this.http.get(target+'title=\'my title\'&body=\'the body\'&userId=1')
          this.http.get(target+message)
            .subscribe((data) => {
              console.log('Got some data from backend ', data);
              this.extractMessage(data, "getGetInfo");
            }, (error) => {
              console.log('Error! ', error);
            });
        }
        
        extractMessage(obj, name: string){
          let item = obj[name];
          try {
            if (item) {
              let mstr = item.message;
              this.mobj = JSON.parse(mstr);
            }
          }catch(err){
            this.mobj = {};
            this.mobj["message"] = "Error extracting 'message' from ["+name+"]";
          }
          this.mkeys = Object.keys(this.mobj);
        }
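      • The message round trip in doGet and extractMessage boils down to an encode/decode pair. A sketch (function names are mine):

```typescript
// Encode: JSON-stringify the payload and URI-encode it into a query
// parameter, the same way doGet() builds its GET request.
function encodeMessage(payload: object): string {
  return 'message=' + encodeURIComponent(JSON.stringify(payload));
}

// Decode: what the PHP echo plus extractMessage() amount to on the way
// back -- pull the parameter out and JSON-parse it.
function decodeMessage(query: string): object {
  const raw = query.replace(/^message=/, '');
  return JSON.parse(decodeURIComponent(raw));
}
```

        Round-tripping { title: 'foo', body: 'bar', userId: 1 } through encodeMessage and decodeMessage returns an equivalent object, which is what the console output shows.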
      • And here’s the HTML code.
      • Here’s a screenshot of everything working: PostGetTest
