Category Archives: Angular

Phil 8.10.18

7:00 – ASRC MKT

  • Finished the first pass through the SASO slides. Need to start working on timing (25 min + 5 min questions)
  • Start on poster (A0 size)
  • Sent Wayne a note to get permission for 899
  • Started setting up laptop. I hate this part. Google drive took hours to synchronize
    • Java
    • Python/Nvidia/Tensorflow
    • Intellij
    • Visual Studio
    • MikTex
    • TexStudio
    • Xampp
    • Vim
    • TortoiseSVN
    • WinSCP
    • 7-zip
    • Creative Cloud
      • Acrobat
      • Reader
      • Illustrator
      • Photoshop
    • Microsoft suite
    • Express VPN

Phil 7.27.18

Ted Underwood

  • my research is as much about information science as literary criticism. I’m especially interested in applying machine learning to large digital collections
  • Git repo with code for upcoming book: Distant Horizons: Digital Evidence and Literary Change
  • Do topic models warp time?
    • The key observation I wanted to share is just that topic models produce a kind of curved space when applied to long timelines; if you’re measuring distances between individual topic distributions, it may not be safe to assume that your yardstick means the same thing at every point in time. This is not a reason for despair: there are lots of good ways to address the distortion. The mathematics of cosine distance tend to work better if you average the documents first, and then measure the cosine between the averages (or “centroids”).
  • The Historical Significance of Textual Distances
    • Measuring similarity is a basic task in information retrieval, and now often a building-block for more complex arguments about cultural change. But do measures of textual similarity and distance really correspond to evidence about cultural proximity and differentiation? To explore that question empirically, this paper compares textual and social measures of the similarities between genres of English-language fiction. Existing measures of textual similarity (cosine similarity on tf-idf vectors or topic vectors) are also compared to new strategies that use supervised learning to anchor textual measurement in a social context.
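  • A minimal numpy sketch of the centroid comparison Underwood recommends above (the document-topic vectors are random stand-ins): average the topic distributions within each time slice first, then take the cosine between the averages, rather than averaging pairwise document-to-document cosines:
    import numpy as np

    def cosine(a, b):
        # cosine similarity between two vectors
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # each row is one document's topic distribution (made-up data)
    docs_1900s = np.random.dirichlet(np.ones(50), size=200)
    docs_1950s = np.random.dirichlet(np.ones(50), size=200)

    # centroid-first comparison, per the post's recommendation
    centroid_sim = cosine(docs_1900s.mean(axis=0), docs_1950s.mean(axis=0))

    # versus the average of pairwise document-to-document cosines
    pairwise_mean = np.mean([cosine(a, b) for a in docs_1900s for b in docs_1950s])
    print(centroid_sim, pairwise_mean)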

7:00 – 8:00 ASRC MKT

  • Continued on slides. I think I have the basics. Need to start looking for pictures
  • Sent response to the SASO folks about who’s presenting what.

9:00 – ASRC IRAD

Phil 6.27.18

7:00 – 12:00 ASRC MKT

  • Print out documents! Done. Got passport drive too.
  • Need to write an extractor that lets the user navigate the XML file containing influences of selected agents. This could be a sample-by-sample network. Maybe two modes?
    • Select an agent and see all the other agents come in and out of influence
    • Select a number of agents and only watch the mutual influence.
    • There are integrated JavaFX charts that I could use, or it could be an uploaded webapp? JavaFX would be easier in the short term, but a webapp would help more with JuryRoom…
    • Another option would be Python, since that’s where the LSTM code will live.
    • On the whole, two days before leaving on travel is probably the wrong time to start coding
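    • A rough Python sketch of what the extractor could look like (the XML layout here – <sample> elements holding <influence src dst weight> entries – is hypothetical, since the real schema isn’t reproduced in these notes):
      import xml.etree.ElementTree as ET

      def influences_on(xml_path, agent):
          # yield (sample_index, source_agent, weight) for every influence acting on one agent
          root = ET.parse(xml_path).getroot()
          for i, sample in enumerate(root.iter('sample')):
              for inf in sample.iter('influence'):
                  if inf.get('dst') == agent:
                      yield i, inf.get('src'), float(inf.get('weight', 0.0))

      # mode 1: watch the other agents come in and out of influence on a single agent
      for step, src, w in influences_on('influences.xml', 'agent_3'):
          print(step, src, w)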
  • Fixed a bug in the xml file generation
  • copied the new jar file onto the thumb drive
  • copied the xml file onto the thumb drive

12:00 – 4:00 ASRC A2P

  • Promoting things to QA – done! Or at least, up to date with the Excel files

Phil 6.20.18

7:00 – 9:00, 2:00 – 5:00 ASRC MKT

  • Redo doodle for all of August – done
  • Schooling Fish May Offer Insights Into Networked Neurons
    • Iain Couzin is deciphering the rules that govern group behavior. The results might provide a fresh perspective on how networks of neurons work together.
  • City arts and lectures: The New Science Of Psychedelics With Michael Pollan
    • Psychedelics reduce activity in the parts of the brain that have to do with the sense of self. Pollan thinks that this also happens with certain types of rhythmic music and in crowd situations. This could be related to stampedes and flocking.
    • LSD May Chip Away at the Brain’s “Sense of Self” Network
      • Brain imaging suggests LSD’s consciousness-altering traits may work by hindering some brain networks and boosting overall connectivity
  • Add an attractor scalar for agents that’s normally zero. A vector to each agent within the SIH is calculated and scaled by the attractor scalar. That vector is then added to the agent’s direction vector – done? (a rough sketch is at the end of this list)
  • Remove the heading influence based on site – done
  • Add a white circle to the center of the agent that is the size of the attraction scalar. Done
  • Add a ‘site trajectory’ to the spreadsheet that will have the site lists (and their percentage?)
  • Worked on A2P white paper with Aaron.
  • Worked on a response to Dr. Li’s response
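  • Rough sketch of the attractor-scalar step described above (plain Python, 2D tuples; the names and values are made up, since the real code lives in the simulator):
    import math

    def add_attraction(agent_pos, agent_heading, neighbors, attractor_scalar=0.0):
        # add a scaled unit vector toward each neighbor in the SIH to the heading
        hx, hy = agent_heading
        for nx, ny in neighbors:
            dx, dy = nx - agent_pos[0], ny - agent_pos[1]
            dist = math.hypot(dx, dy)
            if dist > 0:
                hx += attractor_scalar * dx / dist
                hy += attractor_scalar * dy / dist
        # renormalize so the result is still a direction vector
        mag = math.hypot(hx, hy)
        return (hx / mag, hy / mag) if mag > 0 else agent_heading

    # with the scalar at its normal value of zero the heading is unchanged
    print(add_attraction((0, 0), (1, 0), [(0, 5), (3, 4)], attractor_scalar=0.0))
    print(add_attraction((0, 0), (1, 0), [(0, 5), (3, 4)], attractor_scalar=0.5))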

ASRC IRAD 9:00 – 2:00

  • Mind meld with Bob
    • Revisit Yarn
    • Excel stuff?
    • Connect to AWS using the bastion host. Look in FoxyProxy for the how-to. I need certs
    • Drop on Rabbit to deploy to CI, QA, and NESDIS ONE (production)
    • Don’t want sensitive information in Git. We use SharePoint instead
    • Notes and screenshots in document.

Phil 6.19.18

7:00 – 9:00, 4:00 – 5:00 ASRC MKT

  • Here’s a list of organizations that are mobilizing to help immigrant children separated from their families
  • SASO trip
  • Rebuilt all the binaries, now I need to put them on the thumb drive – done
  • Added knobs to the implications slide. They sit next to the dimension and SIH lines. I realize that my slide deck is becoming a physical version of a memory palace.
  • Continuing Irrational Exuberance, though feeling like I should be reading Axelrod. Bring Evolution of Cooperation on the flight?
  • Naive Diversification Strategies in Defined Contribution Saving Plans
    • There is a worldwide trend toward defined contribution saving plans and growing interest in privatized social security plans. In both environments, individuals are given some responsibility to make their own asset allocation decisions, raising concerns about how well they do at this task. This paper investigates one aspect of the task, namely diversification. We show that many investors have very naive notions about diversification. For example, some investors follow what we call the 1/n strategy: they divide their contributions evenly across the funds offered in the plan. When this strategy (or others only slightly more sophisticated) is used, the assets chosen depend greatly on the make-up of the funds offered in the plan. We find evidence of naive diversification strategies both in experiments using employees at the University of California and the actual behavior of participants in a wide range of savings plans. In particular, we find the proportion of the assets the participants invest in stocks depends strongly on the proportion of stock funds in the plan. The results raise very serious questions about how privatized social security systems should be designed, questions that would be ignored in most economic analyses.
    • This is very much a dimension reduction exercise.
  • A2P maintenance proposal
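  • A tiny numeric illustration of the 1/n point in the abstract above (hypothetical fund menus): with contributions split evenly across whatever funds are offered, the equity share is just the fraction of the menu that happens to be stock funds, so the allocation is driven by the menu rather than by any risk preference:
    def one_over_n_equity_share(funds):
        # funds: list of 'stock' / 'bond' labels; contributions are split evenly
        return sum(1 for f in funds if f == 'stock') / len(funds)

    print(one_over_n_equity_share(['stock', 'bond']))                    # 0.5
    print(one_over_n_equity_share(['stock', 'stock', 'stock', 'bond']))  # 0.75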

9:00 – 4:00 ASRC A2P

  • Coming up to speed on the Angular interface
    • Logging into CI and QA
    • Dashboard configurations

Phil 5.2.18

7:00 – 4:30 ASRC MKT

    • I am going to start calling runaway echo chambers Baudrillardian Stampedes: https://en.wikipedia.org/wiki/Simulacra_and_Simulation
    • GECCO 2018 paper list is full of swarming optimizers
    • CORNELL NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. The summaries are obtained from search and social metadata between 1998 and 2017 and use a variety of summarization strategies combining extraction and abstraction.
    • More Ultimate Angular
      • Template Fundamentals (interpolation – #ref)
    • Now that I have my corpora, time to figure out how to build an embedding
    • Installing gensim
      • By now, gensim is—to my knowledge—the most robust, efficient and hassle-free piece of software to realize unsupervised semantic modelling from plain text. It stands in contrast to brittle homework-assignment-implementations that do not scale on one hand, and robust java-esque projects that take forever just to run “hello world”.
      • Big install. Didn’t break TF, which is nice
    • How to Develop Word Embeddings in Python with Gensim
      • Following the tutorial. Here’s a plot! [W2V plot]
    • I need to redo the parser so that each file is one sentence.
      • sentences are strings that begin with a [CR] or [SPACE] + [WORD] and end with [WORD] + [.] or [“]
      • a [CR] preceded by anything other than a [.] or [“] is the middle of a sentence
      • A fantastic regex tool! https://regex101.com/
        • regex = re.compile(r"([-!?\.]\"|[!?\.])")
      • After running into odd edge cases, I decided to load each book as a single string, parse it, then write out the individual lines. Works great except the last step, where I can’t seem to iterate over an array of strings. Calling it a day
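      • A minimal sketch of the pipeline described above (file names are made up; the regex is the one noted earlier): load each book as one string, split it into one-sentence lines, write the lines out, then feed the tokenized sentences to gensim’s Word2Vec:
        import re
        from gensim.models import Word2Vec

        splitter = re.compile(r"([-!?\.]\"|[!?\.])")

        def book_to_sentences(path):
            # load the whole book, split on sentence-ending punctuation, clean whitespace
            with open(path, encoding='utf-8') as f:
                text = f.read()
            parts = splitter.split(text)
            # re.split keeps the captured delimiters; glue each sentence back onto its terminator
            sentences = [(parts[i] + parts[i + 1]).strip() for i in range(0, len(parts) - 1, 2)]
            return [s.replace('\n', ' ') for s in sentences if s]

        sentences = book_to_sentences('brown_wolf.txt')
        with open('brown_wolf_lines.txt', 'w', encoding='utf-8') as out:
            out.write('\n'.join(sentences))  # one sentence per line

        # train a small embedding on the tokenized sentences
        model = Word2Vec([s.lower().split() for s in sentences], min_count=2)
        print(model.wv.most_similar('wolf'))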

 

Phil 5.1.18

7:00 – 4:30 ASRC MKT

  • Applications of big social media data analysis: An overview
    • Over the last few years, online communication has moved toward user-driven technologies, such as online social networks (OSNs), blogs, online virtual communities, and online sharing platforms. These social technologies have ushered in a revolution in user-generated data, online global communities, and rich human behavior-related content. Human-generated data and human mobility patterns have become important steps toward developing smart applications in many areas. Understanding human preferences is important to the development of smart applications and services to enable such applications to understand the thoughts and emotions of humans, and then act smartly based on learning from social media data. This paper discusses the role of social media data in comprehending online human data and in consequently different real applications of SM data for smart services are executed.
  • Explainable, Interactive Deep Learning
    • Recently, deep learning has been advancing the state of the art in artificial intelligence to yet another level, and humans are relying more and more on the outputs generated by artificial intelligence techniques than ever before. However, even with such unprecedented advancements, the lack of interpretability on the decisions made by deep learning models and no control over their internal processes act as a major drawback when utilizing them to critical decision-making processes such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. In this paper, we review recent studies relevant to this direction and discuss potential challenges and future research directions.
  • Building successful online communities: Evidence-based social design (book review)
    • In Building Successful Online Communities (2012), Robert Kraut, Paul Resnick, and their collaborators set out to draw links between the design of socio-technical systems with findings from social psychology and economics. Along the way, they set out a vision for the role of social sciences in the design of systems like mailing lists, discussion forums, wikis, and social networks, offering a way that behavior on those platforms might inform our understanding of human behavior.
  • Since I’ve forgotten my Angular stuff, reviewing the UltimateAngular ‘Angular Fundamentals’ course. Finished the ‘Getting Started’ section
  • Strip out Gutenberg text from corpora – done!

Phil 4.30.18

7:00 – 4:30 ASRC MKT

  • Some new papers from ICLR 2018
  • Need to write up a quick post for communicating between Angular and a (PHP) server, with an optional IntelliJ configuration section
  • JuryRoom this morning and then GANs + Agents this afternoon?
  • Next steps for JuryRoom
    • Start up the AngularPro course
    • Set up PHP access to DB, returning JSON objects
  • Starting Agent/GAN project
    • Need to set up an ACM paper to start dumping things into – done.
    • Looking for a good source for Jack London. Gutenberg looks nice, but there is a no-scraping rule, so I guess we’ll do this by hand…
    • We will need to check for redundant short stories
    • We will need to strip the front and back matter that pertains to Project Gutenberg
      • *** START OF THIS PROJECT GUTENBERG EBOOK BROWN WOLF AND OTHER JACK ***
      • *** END OF THIS PROJECT GUTENBERG EBOOK BROWN WOLF AND OTHER JACK ***
  • Fika: Accessibility at the Intersection of Users and Data
    • Nice talk and followup discussion with Dr. Hernisa Kacorri, who’s combining machine learning and HCC
      • My research goal is to build technologies that address real-world problems by integrating data-driven methods and human-computer interaction. I am interested in investigating human needs and challenges that may benefit from advancements in artificial intelligence. My focus is both in building new models to address these challenges and in designing evaluation methodologies that assess their impact. Typically my research involves application of machine learning and analytics research to benefit people with disabilities, especially assistive technologies that model human communication and behavior such as sign language avatars and independent mobility for the blind.

Phil 4.27.18

7:00 – 4:00 ASRC MKT

  • Call Charlestown about getting last two years of payments – done. Left a message
  • Get parking from StubHub
  • I saw James Burnham’s interview on the Daily Show last night. Roughly, I think his point that humans haven’t changed, but our norms and practices have, is true. Based on listening to him talk, I think he’s more focused on the symptoms than the cause. The question that he doesn’t seem to be asking is “why did civilization emerge when it did?”, and “why does it seem to be breaking now?”
    Personally, I think it’s tied up with communication technology. Slow communication systems like writing, mail, and the printing press lead to civilization. Rapid, frictionless forms of communication from radio to social media disrupt this process by changing how we define, perceive and trust our neighbors. The nice thing is that if technology is the critical element, then technology can be adjusted. Not that it’s easier, but it’s probably easier than changing humans.
  • Continuing From I to We: Group Formation and Linguistic Adaption in an Online Xenophobic Forum. Done and posted in Phlog
  • Tweaking the Angular and PHP code.
  • I got the IntelliJ debugger to connect to the Apache PHP server! Here are the final steps. Pay particular attention to the highlighted areas:
    • File->Settings->Languages & Frameworks->PHP->Debug [Debug1 screenshot]
    • Validate: [Debug2 screenshot]
  • Objects are now coming back in the same way, so no parsing on the Angular side
  • Sprint planning

Phil 4.26.18

Too much stuff posted yesterday, so I’m putting Kate Starbird’s new paper here:

  • Ecosystem or Echo-System? Exploring Content Sharing across Alternative Media Domains
    • This research examines the competing narratives about the role and function of Syria Civil Defence, a volunteer humanitarian organization popularly known as the White Helmets, working in war-torn Syria. Using a mixed-method approach based on seed data collected from Twitter, and then extending out to the websites cited in that data, we examine content sharing practices across distinct media domains that functioned to construct, shape, and propagate these narratives. We articulate a predominantly alternative media “echo-system” of websites that repeatedly share content about the White Helmets. Among other findings, our work reveals a small set of websites and authors generating content that is spread across diverse sites, drawing audiences from distinct communities into a shared narrative. This analysis also reveals the integration of government funded media and geopolitical think tanks as source content for anti-White Helmets narratives. More broadly, the analysis demonstrates the role of alternative newswire-like services in providing content for alternative media websites. Though additional work is needed to understand these patterns over time and across topics, this paper provides insight into the dynamics of this multi-layered media ecosystem.

7:00 – 5:00 ASRC MKT

  • Referencing for Aanton at 5:00
  • Call Charlestown about getting last two years of payments
  • Benjamin D. Horne, Sara Khedr, and Sibel Adali. “Sampling the News Producers: A Large News and Feature Data Set for the Study of the Complex Media Landscape” ICWSM 2018
  • Continuing From I to We: Group Formation and Linguistic Adaption in an Online Xenophobic Forum
  • Anchor-Free Correlated Topic Modeling
    • In topic modeling, identifiability of the topics is an essential issue. Many topic modeling approaches have been developed under the premise that each topic has an anchor word, which may be fragile in practice, because words and terms have multiple uses; yet it is commonly adopted because it enables identifiability guarantees. Remedies in the literature include using three- or higher-order word co-occurrence statistics to come up with tensor factorization models, but identifiability still hinges on additional assumptions. In this work, we propose a new topic identification criterion using second order statistics of the words. The criterion is theoretically guaranteed to identify the underlying topics even when the anchor-word assumption is grossly violated. An algorithm based on alternating optimization, and an efficient primal-dual algorithm are proposed to handle the resulting identification problem. The former exhibits high performance and is completely parameter-free; the latter affords up to 200 times speedup relative to the former, but requires step-size tuning and a slight sacrifice in accuracy. A variety of real text corpora are employed to showcase the effectiveness of the approach, where the proposed anchor-free method demonstrates substantial improvements compared to a number of anchor-word based approaches under various evaluation metrics.
  • Cleaning up the Angular/PHP example. Put on GitHub?

Phil 4.25.18

7:00 – 3:30 ASRC MKT

  • Google’s Workshop on AI/ML Research and Practice in India:
    Ganesh Ramakrishnan (IIT Bombay) presented research on human assisted machine learning.
  • From I to We: Group Formation and Linguistic Adaption in an Online Xenophobic Forum
    • Much of identity formation processes nowadays takes place online, indicating that intergroup differentiation may be found in online communities. This paper focuses on identity formation processes in an open online xenophobic, anti-immigrant, discussion forum. Open discussion forums provide an excellent opportunity to investigate open interactions that may reveal how identity is formed and how individual users are influenced by other users. Using computational text analysis and Linguistic Inquiry Word Count (LIWC), our results show that new users change from an individual identification to a group identification over time as indicated by a decrease in the use of “I” and increase in the use of “we”. The analyses also show increased use of “they” indicating intergroup differentiation. Moreover, the linguistic style of new users became more similar to that of the overall forum over time. Further, the emotional content decreased over time. The results indicate that new users on a forum create a collective identity with the other users and adapt to them linguistically.
    • Social influence is broadly defined as any change – emotional, behavioral, or attitudinal – that has its roots in others’ real or imagined presence (Allport, 1954). (pg 77)
    • Regardless of why an individual displays an observable behavioral change that is in line with group norms, social identification with a group is the basis for the change. (pg 77)
    • In social psychological terms, a group is defined as more than two people that share certain goals (Cartwright & Zander, 1968). (pg 77)
    • Processes of social identification, intergroup differentiation and social influence have to date not been studied in online forums. The aim of the present research is to fill this gap and provide information on how such processes can be studied through language used on the forum. (pg 78)
    • The popularity of social networking sites has increased immensely during the last decade. At the same time, offline socializing has shown a decline (Duggan & Smith, 2013). Now, much of the socializing actually takes place online (Ganda, 2014). In order to be part of an online community, the individual must socialize with other users. Through such socializing, individuals create self-representations (Enli & Thumim, 2012). Hence, the processes of identity formation, may to a large extent take place on the Internet in various online forums. (pg 78)
    • For instance, linguistic analyses of American Nazis have shown that use of third person plural pronouns (they, them, their) is the single best predictor of extreme attitudes (Pennebaker & Chung, 2008). (pg 79)
    • Because language can be seen as behavior (Fiedler, 2008), it may be possible to study processes of social influence through linguistic analysis. Thus, our second hypothesis is that the linguistic style of new users will become increasingly similar to the linguistic style of the overall forum over time (H2). (pg 79)
    • This indicates that the content of the posts in an online forum may also change over time as arguments become more fine-tuned and input from both supporting and contradicting members are integrated into an individual’s own beliefs. This is likely to result (linguistically) in an increase in indicators of cognitive complexity. Hence, we hypothesize that the content of the posts will change over time, such that indicators of complex thinking will increase (H3a). (pg 80)
      • I’m not sure what to think about this. I expect, from dimension reduction, that as the group becomes more aligned, the overall complex thinking will reduce, and the outliers will leave, at least in the extreme of a stampede condition.
    • This result indicates that after having expressed negativity in the forum, the need for such expressions should decrease. Hence, we expect that the content of the posts will change such that indicators of negative emotions will decrease, over time (H3b). (pg 80)
    • the forum is presented as a “very liberal forum”, where people are able to express their opinions, whatever they may be. This “extreme liberal” idea implies that there is very little censorship, which has resulted in that the forum is highly xenophobic. Nonetheless, due to its liberal self-presentation, the xenophobic discussions are not unchallenged. For example, also anti-racist people join this forum in order to challenge individuals with xenophobic attitudes. This means that the forum is not likely to function as a pure echo chamber, because contradicting arguments must be met with own arguments. Hence, individuals will learn from more experienced users how to counter contradicting arguments in a convincing way. Hence, they are likely to incorporate new knowledge, embrace input and contribute to evolving ideas and arguments. (pg 81)
      • Open debate can lead to the highest level of polarization (M&D)
      • There isn’t diverse opinion. The conversation is polarized, with opponents pushing towards the opposite pole. The question I’d like to see answered is: has extremism increased in the forum?
    • Natural language analyses of anonymous social media forums also circumvent social desirability biases that may be present in traditional self-rating research, which is a particular important concern in relation to issues related to outgroups (Maass, Salvi, Arcuri, & Semin, 1989; von Hippel, Sekaquaptewa, & Vargas, 1997, 2008). The to-be analyzed media uses “aliases”, yielding anonymity of the users and at the same time allow us to track individuals over time and analyze changes in communication patterns. (pg 81)
      • After seeing “Ready Player One”, I also wonder if the aliases themselves could be looked at using an embedding space built from the terms used by the users? Then you get distance measurements, t-sne projections, etc.
    • Linguistic Inquiry Word Count (LIWC; Pennebaker et al., 2007; Chung & Pennebaker, 2007; Pennebaker, 2011b; Pennebaker, Francis, & Booth, 2001) is a computerized text analysis program that computes a LIWC score, i.e., the percentage of various language categories relative to the number of total words (see also www.liwc.net). (pg 81)
      • LIWC2015 ($90) is the gold standard in computerized text analysis. Learn how the words we use in everyday language reveal our thoughts, feelings, personality, and motivations. Based on years of scientific research, LIWC2015 is more accurate, easier to use, and provides a broader range of social and psychological insights compared to earlier LIWC versions
    • Figure 1c shows words overrepresented in later posts, i.e. words where the usage of the words correlates positively with how long the users has been active on the forum. The words here typically lack emotional content and are indicators of higher complexity in language. Again, this analysis provides preliminary support for the idea that time on the forum is related to more complex thinking, and less emotionality.
      • [WordCloud figure]
    • The second hypothesis was that the linguistic style of new users would become increasingly similar to other users on the forum over time. This hypothesis is evaluated by first z-transforming each LIWC score, so that each has a mean value of zero and a standard deviation of one. Then we measure how each post differs from the standardized values by summing the absolute z-values over all 62 LIWC categories from 2007. Thus, low values on these deviation scores indicate that posts are more prototypical, or highly similar, to what other users write. These deviation scores are analyzed in the same way as for Hypothesis 1 (i.e., by correlating each user score with the number of days on the forum, and then t-testing whether the correlations are significantly different from zero). In support of the hypothesis, the results show an increase in similarity, as indicated by decreasing deviation scores (Figure 2). The mean correlation coefficient between this measure and time on the forum was -.0086, which is significant, t(11749) = -3.77, p < 0.001. (pg 85)
      • [ForumAlignment figure] I think it is reasonable to consider this a measure of alignment (a rough sketch of the deviation-score calculation is at the end of these paper notes)
    • Because individuals form identities online and because we see this in the use of pronouns, we also expected to see tendencies of social influence and adaption. This effect was also found, such that individuals’ linguistic style became increasingly similar to other users’ linguistic style over time. Past research has shown that accommodation of communication style occurs automatically when people connect to people or groups they like (Giles & Ogay 2007; Ireland et al., 2011), but also that similarity in communicative style functions as cohesive glue within a group (Reid, Giles, & Harwood, 2005). (pg 86)
    • Still, the results could not confirm an increase in cognitive complexity. It is difficult to determine why this was not observed even though a general trend to conform to the linguistic style on the forum was observed. (pg 87)
      • This is what I would expect. As alignment increases, complexity, as expressed by higher-dimensional thinking, should decrease.
    • This idea would also be in line with previous research that has shown that expressing oneself decreases arousal (Garcia et al., 2016). Moreover, because the forum is not explicitly racist, individuals may have simply adapted to the social norms on the forum prescribing less negative emotional displays. Finally, a possible explanation for the decrease in negative emotional words might be that users who are very angry leave the forum, because of its non-racist focus, and end up in more hostile forums. An interesting finding that was not part of the hypotheses in the present research is that the third person plural category correlated positively with all four negative emotions categories, suggesting that people using for example ‘they’ express more negative emotions (pg 87)
    • In line with social identity theory (Tajfel & Turner, 1986), we also observe linguistic adaption to the group. Hence, our results indicate that processes of identity formation may take place online. (pg 87)
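    • A rough numpy/scipy sketch of the deviation-score method quoted above (random data standing in for the 62 LIWC categories): z-transform each category, sum the absolute z-values per post as a deviation score, correlate that with the user’s time on the forum, and t-test the per-user correlations against zero:
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_users, n_posts, n_cats = 50, 40, 62

      user_correlations = []
      for _ in range(n_users):
          days = np.sort(rng.uniform(0, 365, n_posts))        # days on the forum per post
          liwc = rng.normal(size=(n_posts, n_cats))           # stand-in LIWC scores
          z = (liwc - liwc.mean(axis=0)) / liwc.std(axis=0)   # z-transform each category
          deviation = np.abs(z).sum(axis=1)                   # distance from the "prototypical" post
          r, _ = stats.pearsonr(days, deviation)              # per-user correlation with time
          user_correlations.append(r)

      # alignment would show up as correlations significantly below zero
      print(stats.ttest_1samp(user_correlations, 0.0))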
  • Me, My Echo Chamber, and I: Introspection on Social Media Polarization
    • Homophily — our tendency to surround ourselves with others who share our perspectives and opinions about the world — is both a part of human nature and an organizing principle underpinning many of our digital social networks. However, when it comes to politics or culture, homophily can amplify tribal mindsets and produce “echo chambers” that degrade the quality, safety, and diversity of discourse online. While several studies have empirically proven this point, few have explored how making users aware of the extent and nature of their political echo chambers influences their subsequent beliefs and actions. In this paper, we introduce Social Mirror, a social network visualization tool that enables a sample of Twitter users to explore the politically-active parts of their social network. We use Social Mirror to recruit Twitter users with a prior history of political discourse to a randomized experiment where we evaluate the effects of different treatments on participants’ i) beliefs about their network connections, ii) the political diversity of who they choose to follow, and iii) the political alignment of the URLs they choose to share. While we see no effects on average political alignment of shared URLs, we find that recommending accounts of the opposite political ideology to follow reduces participants’ beliefs in the political homogeneity of their network connections but still enhances their connection diversity one week after treatment. Conversely, participants who enhance their belief in the political homogeneity of their Twitter connections have less diverse network connections 2-3 weeks after treatment. We explore the implications of these disconnects between beliefs and actions on future efforts to promote healthier exchanges in our digital public spheres.
  • What We Read, What We Search: Media Attention and Public Attention Among 193 Countries
    • We investigate the alignment of international attention of news media organizations within 193 countries with the expressed international interests of the public within those same countries from March 7, 2016 to April 14, 2017. We collect fourteen months of longitudinal data of online news from Unfiltered News and web search volume data from Google Trends and build a multiplex network of media attention and public attention in order to study its structural and dynamic properties. Structurally, the media attention and the public attention are both similar and different depending on the resolution of the analysis. For example, we find that 63.2% of the country-specific media and the public pay attention to different countries, but local attention flow patterns, which are measured by network motifs, are very similar. We also show that there are strong regional similarities with both media and public attention that is only disrupted by significantly major worldwide incidents (e.g., Brexit). Using Granger causality, we show that there are a substantial number of countries where media attention and public attention are dissimilar by topical interest. Our findings show that the media and public attention toward specific countries are often at odds, indicating that the public within these countries may be ignoring their country-specific news outlets and seeking other online sources to address their media needs and desires.
  • “You are no Jack Kennedy”: On Media Selection of Highlights from Presidential Debates
    • Our findings indicate that there exist signals in the textual information that untrained humans do not find salient. In particular, highlights are locally distinct from the speaker’s previous turn, but are later echoed more by both the speaker and other participants (Conclusions)
      • This sounds like dimension reduction and alignment
  • Algorithms, bots, and political communication in the US 2016 election – The challenge of automated political communication for election law and administration
    • Philip N. Howard (Scholar)
    • Samuel C. Woolley (Scholar)
    • Ryan Calo (Scholar)
    • Political communication is the process of putting information, technology, and media in the service of power. Increasingly, political actors are automating such processes, through algorithms that obscure motives and authors yet reach immense networks of people through personal ties among friends and family. Not all political algorithms are used for manipulation and social control however. So what are the primary ways in which algorithmic political communication—organized by automated scripts on social media—may undermine elections in democracies? In the US context, what specific elements of communication policy or election law might regulate the behavior of such “bots,” or the political actors who employ them? First, we describe computational propaganda and define political bots as automated scripts designed to manipulate public opinion. Second, we illustrate how political bots have been used to manipulate public opinion and explain how algorithms are an important new domain of analysis for scholars of political communication. Finally, we demonstrate how political bots are likely to interfere with political communication in the United States by allowing surreptitious campaign coordination, illegally soliciting either contributions or votes, or violating rules on disclosure.
  • Ok, back to getting HttpClient posts to play with PHP cross-domain
  • Maybe I have to make a proxy?
    • Using the proxying support in webpack’s dev server we can highjack certain URLs and send them to a backend server. We do this by passing a file to --proxy-config
    • Well, that fixes the need to have all the server options set, but the post still doesn’t send data. But since this is the Right way to do things, here are the steps:
    • To proxy localhost:4200/uli -> localhost:80/uli
      • Create a proxy.conf.json file in the same directory as package.json
        {
          "/uli": {
            "target": "http://localhost:80",
            "secure": false
          }
        }

        This will cause any explicit request to localhost:4200/uli to be mapped to localhost:80/uli and appear as if it is coming from localhost:80/uli

      • Set the npm start command in the package.json file to read as
        "scripts": {
          "start": "ng serve --proxy-config proxy.conf.json",
          ...
        },

        Start with “npm start”, rather than “ng serve”

      • Call from Angular like this:
        this.http.post('http://localhost:4200/uli/script.php', payload, httpOptions)
      • Here’s the PHP code (script.php): it takes POST and GET input and feeds it back with some information about the source :
        function getBrowserInfo(){
             $browserData = array();
             $ip = htmlentities($_SERVER['REMOTE_ADDR']);
             $browser = htmlentities($_SERVER['HTTP_USER_AGENT']);
             $referrer = "No Referrer";
             if(isset($_SERVER['HTTP_REFERER'])) {
                 //do what you need to do here if it's set
                 $referrer = htmlentities($_SERVER['HTTP_REFERER']);
                 if($referrer == ""){
                     $referrer = "No Referrer";
                 }
             }
             $browserData["ipAddress"] = $ip;
             $browserData["browser"] = $browser;
             $browserData["referrer"] = $referrer;
             return $browserData;
         }
         function getPostInfo(){
             $postInfo = array();
             foreach($_POST as $key => $value) {
                if(strlen($value) < 10000) {
                    $postInfo[$key] = $value;
                }else{
                    $postInfo[$key] = "string too long";
                }
            }
            return $postInfo;
        }
        function getGetInfo(){
            $getInfo = array();
            foreach($_GET as $key => $value) {
                if(strlen($value) < 10000) {
                    $getInfo[$key] = $value;
                }else{
                    $getInfo[$key] = "string too long";
                }
            }
            return $getInfo;
        }
        
        /**************************** MAIN ********************/
        $toReturn = array();
        $toReturn['getPostInfo'] = getPostInfo();
        $toReturn['getGetInfo'] = getGetInfo();
        $toReturn['browserInfo'] = getBrowserInfo();
        $toReturn['time'] = date("h:i:sa");
        $jstr =  json_encode($toReturn);
        echo($jstr);
      • And it arrives at localhost:80/uli/script.php. The following is the JavaScript console output of the Angular CLI code running on localhost:4200
        {getPostInfo: Array(0), getGetInfo: {…}, browserInfo: {…}, time: "05:17:16pm"}
        browserInfo:
        	browser:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
        	ipAddress:"127.0.0.1"
        	referrer:"http://localhost:4200/"
        getGetInfo:
        	message:"{"title":"foo","body":"bar","userId":1}"
        getPostInfo:[]
        time:"05:17:16pm"
        
      • Got the pieces parsing in @Component and displaying, so the round trip is done. Wasn’t expecting to wind up using GET, but until I can figure out what the deal is with POST, that’s what it’s going to be. (A likely culprit, worth checking: PHP only auto-populates $_POST for form-encoded or multipart bodies, so a JSON payload from HttpClient would have to be read from php://input or sent URL-encoded.) Here are the two methods that send and then parse the message:
        doGet(event) {
          let payload = {
            title: 'foo',
            body: 'bar',
            userId: 1
          };
          let message = 'message='+encodeURIComponent(JSON.stringify(payload));
          let target = 'http://localhost:4200/uli/script.php?';
        
          //this.http.get(target+'title=\'my title\'&body=\'the body\'&userId=1')
          this.http.get(target+message)
            .subscribe((data) => {
              console.log('Got some data from backend ', data);
              this.extractMessage(data, "getGetInfo");
            }, (error) => {
              console.log('Error! ', error);
            });
        }
        
        extractMessage(obj, name: string){
          let item = obj[name];
          try {
            if (item) {
              let mstr = item.message;
              this.mobj = JSON.parse(mstr);
            }
          }catch(err){
            this.mobj = {};
            this.mobj["message"] = "Error extracting 'message' from ["+name+"]";
          }
          this.mkeys = Object.keys(this.mobj);
        }
      • And here’s the HTML code: [html snippet]
      • Here’s a screenshot of everything working: [PostGetTest screenshot]

Phil 4.24.18

7:00 – 5:00 ASRC MKT

  • Aaron’s at BoP today
  • Working on JuryRoom, particularly hooking up PHP to Angular
  • Here’s the hello world php app that’s working:
    <?php
    header('Access-Control-Allow-Origin: *');
    echo '{"message": "hello"}';
  • And here’s the Angular side:
    uploadFile(event) {
      const elem = event.target;
      if (elem.files.length > 0) {
        const f0 = elem.files[0];
        console.log(f0);
        const formData = new FormData();
        formData.append('file', f0);
    
        this.http.post('http://localhost/uploadImages/script.php', formData)
          .subscribe((data) => {
    
            const jsonResponse = data.json();
    
            // this.gallery.gotSomeDataFromTheBackend(jsonResponse.file);
    
            console.log('Got some data from backend ', data);
          }, (error) => {
            console.log('Error! ', error);
          });
      }
    }
  • Here’s how to connect to the deployment server for debugging (I hope!). From Importing settings from a server access (deployment) configuration. [DebugPhpServer screenshot]
  • Can’t see the post info coming back, so I really need to get the debugger set up to talk to the server. Following these directions: Web Server Debug Validation Dialog. Here’s the dialog with some warnings to be corrected: [EnablePhpDebug screenshot]
  • Note that you HAVE TO RESTART APACHE for any php.ini changes to take
  • Had to add the XDebug Helper Chrome extension. That helped with the PHP running in the browser, but not in the call to PHP from Angular. [XDebugHelper screenshot]
  • Works in Postman, but it doesn’t fire the debugger. Still, at least I know that the data can get to the PHP. Not sure if Angular is sending it. Here are the Postman results: [Postman screenshot]
  • Here’s the debugger view. The data appears to be going up (formData), but it’s not coming back in the echo like it does in Postman. I’ve played around with Content-Type, and that doesn’t seem to help: [Debugger screenshot]
  • In the network view, we can see that the payload is there: [Payload screenshot]
  • So it must not be getting accepted in the PHP…. (Worth checking: with a file appended to FormData, PHP puts the upload in $_FILES rather than $_POST, so echoing $_POST alone would look empty.)

Phil 4.20.18

7:00 – ASRC MKT

  • Executing gradient descent on the earth
    • But the important question is: how well does gradient descent perform on the actual earth?
    • This is nice, because it suggests that we can compare GD algorithms on recognizable and visualizable terrains. Terrain locations can have multiple visualizable factors; height and luminance could be additional dimensions (a toy sketch is at the end of this list)
  • Minds is the anti-facebook that pays you for your time
    • In a refreshing change from Facebook, Twitter, Instagram, and the rest of the major platforms, Minds has also retained a strictly reverse-chronological timeline. The core of the Minds experience, though, is that users receive “tokens” when others interact with their posts, or simply by spending time on the platform.
  • Continuing along with the Angular/PHP tutorial here. Nicely, there is also a Git repo
    • Had to add some styling to get the upload button to show
    • The HttpModule is deprecated, but sticking with it for now
    • Will need to connect/verify PHP server within IntelliJ, described here.
    • How to connect Apache to IntelliJ
  • Installing and Configuring XAMPP with PhpStorm IDE. Don’t forget about the deployment path: [deploy screenshot]
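  • Toy version of the “gradient descent on the earth” idea referenced above (a synthetic height function stands in for real elevation data): step downhill along a numerical gradient and compare where different step sizes end up:
    import numpy as np

    def height(x, y):
        # stand-in terrain: overlapping bumps plus a gentle bowl
        return np.sin(x) * np.cos(y) + 0.1 * (x ** 2 + y ** 2)

    def grad(x, y, eps=1e-5):
        # numerical gradient of the terrain
        gx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
        gy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
        return gx, gy

    def descend(x, y, lr=0.1, steps=500):
        for _ in range(steps):
            gx, gy = grad(x, y)
            x, y = x - lr * gx, y - lr * gy
        return x, y, height(x, y)

    for lr in (0.01, 0.1, 0.5):  # compare "algorithms" (here just step sizes)
        print(lr, descend(3.0, 2.0, lr=lr))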

Phil 4.13.18

7:00 – ASRC MKT/BD

  • That Politico article on “news deserts” doesn’t really show what it claims to show
    • Its heart is in the right place, and the decline of local news really is a big threat to democratic governance.
  • Firing up the JuryRoom effort again
    • Unsurprisingly, there are updates
    • And a lot of fixing plugins. Big update
    • Ok, back to having PHP and MySQL working. Need to see how to integrate it with the Angular CLI
      • Updated CLI as per stackoverflow
        • In order to update the angular-cli package installed globally in your system, you need to run:

          npm uninstall -g angular-cli
          npm cache clean
          npm install -g @angular/cli@latest
          

          Depending on your system, you may need to prefix the above commands with sudo.

          Also, most likely you want to also update your local project version, because inside your project directory it will be selected with higher priority than the global one:

          rm -rf node_modules
          npm uninstall --save-dev angular-cli
          npm install --save-dev @angular/cli@latest
          npm install
          

          thanks grizzm0 for pointing this out on GitHub.

           

        • Updated my work environment too. Some PHP issues, and the Angular CLI wouldn’t update until I turned on the VPN. Duh.
      • Angular 4 + PHP: Setting Up Angular And Bootstrap – Part 2
    • Back to proposal writing

Phil 4.12.18

7:00 – 5:00 ASRC MKT/BD

  • Downloaded my FB DB today. Honestly, the only thing that seems excessive is the contact information
  • Interactive Semantic Alignment Model: Social Influence and Local Transmission Bottleneck
    • Dariusz Kalociński
    • Marcin Mostowski
    • Nina Gierasimczuk
    • We provide a computational model of semantic alignment among communicating agents constrained by social and cognitive pressures. We use our model to analyze the effects of social stratification and a local transmission bottleneck on the coordination of meaning in isolated dyads. The analysis suggests that the traditional approach to learning—understood as inferring prescribed meaning from observations—can be viewed as a special case of semantic alignment, manifesting itself in the behaviour of socially imbalanced dyads put under mild pressure of a local transmission bottleneck. Other parametrizations of the model yield different long-term effects, including lack of convergence or convergence on simple meanings only.
  • Starting to get back to the JuryRoom app. I need a better way to get the data parts up and running. This tutorial seems to have a minimal piece that works with PHP. That may be for the best since this looks like a solo effort for the foreseeable future
  • Proposal
    • Cut implementation down to proof-of-concept?
    • We are keeping the ASRC format
    • Got Dr. Lee’s contribution
    • And a lot of writing and figuring out of things