Monthly Archives: March 2016

Phil 3.13.16

9:00 – 5:00

  • Data journalism is IR with better affordances?

Still thinking about getting lost. In low-information environments, credibility cues and entertainment value can lead to habituation. Habituation can help maintain this process well past the point where someone unfamiliar with the situation would draw the line. Which means the sense of betrayal is higher?

ACM CHIIR Conference Day 1 – Tutorials: User Modelling in Information Retrieval

  • Quantifying performance
  • Practically significant?
  • Statistical significance?
  • User-centered evaluation
    • Measure Users in the wild
      • A/B Testing, etc.
    • User in the lab
  • User performance prediction
    • Record user
    • Create model
    • Calibrate
    • Validate
    • Use model to predict performance
  • Cranfield Paradigm – Cyril Cleverdon
    • TREC – paid assessors
    • User satisfaction for retrieval evaluation metrics
    • Discounted Cumulative Gain (probability of a document being visited WRT rank); can also be normalized WRT an ideal ranking (see the formulas after this list)
    • Expected Reciprocal Rank – pertinence calculation??? Based on the idea that there is one perfect document, whose utility depends on the position of the document
    • Average precision <– search for this
  • Diversity, novelty, tractability
  • Underspecified vs. ambiguous queries
  • Specifications have aspects
  • Ambiguities have interpretations
  • Inferring query intent from reformulations and clicks
  • Ian Soboroff – Mr. TREC
  • Randomization – check animation in slides
  • Bootstrap –
  • Sign test – just for one side or the other of a value. A binomial distribution
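  • For my own reference, the standard textbook definitions of the metrics and test mentioned above (the tutorial slides may use variants):

    \mathrm{DCG@}k = \sum_{i=1}^{k} \frac{2^{rel_i}-1}{\log_2(i+1)},
    \qquad
    \mathrm{nDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k}

    \mathrm{ERR} = \sum_{r=1}^{n} \frac{1}{r}\, R_r \prod_{i=1}^{r-1}(1-R_i),
    \qquad
    R_i = \frac{2^{rel_i}-1}{2^{rel_{\max}}}

    \text{Sign test (one-sided), } k \text{ wins in } n \text{ non-tied pairs: }
    p = \sum_{j=k}^{n} \binom{n}{j} 2^{-n}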

Afternoon session

  • Evaluating whole systems
  • Metrics Derived from Query Logs
    • Use the logs to understand user behavior, then…
    • Learn the parameters of the user model from the query logs (see the sketch after this list)
  • Incorporating UI
  • User Variance
  • Time
    • Costs in time spent searching
    • Benefits in time well spent
    • Initial Assessment – the judge quickly scans the document first. So what if we could make that effort more amenable to measurement?
      • Findability
      • Readability
      • Understandability
      • If the judge has to use tools to find the relevant part of the document and mark it, those biometrics might be usable…
    • Utility Extraction
    • A real user goes through both stages; an Assessor only does step 1, Initial Assessment. But learning can be a third step? It’s certainly the step that would take the most time and would require interdocument relationships
    • What about learning how to disambiguate your query?
    • Conceptual leaps???? Is that an information distance issue???
  • Session
    • Time spent on the last clicked document.
    • A session is often defined just by a timeout (e.g., 30 minutes). TREC is moving away from the Session track and toward Task-Based evaluation
  • Task
    • What is a Gold-Standard task??
    • Which metrics to use??
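  • A minimal sketch of the “learn a user-model parameter from logs” idea above, assuming a simple RBP-style model in which the user scans top-down and continues past each rank with persistence probability p, so the deepest rank examined per query is geometric with mean 1/(1−p). The deepest clicked rank is used as a stand-in for the deepest rank examined; the class and the toy data are invented for illustration.

    import java.util.List;

    /** Estimate the persistence parameter p of a simple top-down scanning
     *  user model from the deepest-clicked rank in each logged query. */
    public class PersistenceEstimator {
        public static double estimate(List<Integer> deepestClickRanks) {
            double sum = 0;
            for (int rank : deepestClickRanks) sum += rank;
            double mean = sum / deepestClickRanks.size();
            return 1.0 - 1.0 / mean;  // MLE of the geometric continuation probability
        }

        public static void main(String[] args) {
            // Toy log: deepest clicked rank per query session
            System.out.println(estimate(List.of(1, 3, 2, 5, 1)));  // ≈ 0.583
        }
    }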

Phil 3.11.16

8:00 – VTX

  • Created new versions of the Friday crawl scheduler, one for GOV, one for ORG.
  • The gap between inaccurate viral news stories and the truth is 13 hours, based on this paper: Hoaxy – A Platform for Tracking Online Misinformation
  • Here’s a rough list of reasons why UGC stored in a graph might be the best way to handle the BestPracticesService.
    • Self generating, self correcting information using incentivized contributions (every time a page you contributed to is used, you get money/medals/other…)
    • Graph database, maybe document elements rather than documents
      BPS has its own network, but it connects to doctors and possibly patients (anonymized?) and their symptoms.
    • Would support Results-driven medicine along a variety of interesting dimensions. For example, we could calculate the best ‘route’ from symptoms to treatment using A* (see the sketch after this list). Conversely, we could see how far from the optimal some providers are.
    • Because it’s UGC, there can be a robust mechanism for keeping information current (think Wikipedia) as well as handling disputes
    • Could be opened up as its own diagnostic/RDM tool.
    • A graph model allows for easy determination of provenance.
    • A good paper to look at: http://www.mdpi.com/1660-4601/6/2/492/htm. One of the social sites it looked at was Medscape, which seems to be UGC
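  • A minimal sketch of the A* routing idea above, over a toy care graph. All node names and weights are invented; with the zero heuristic used here A* reduces to Dijkstra, but a real cost estimate would slot into h().

    import java.util.*;

    /** Hypothetical sketch: find the cheapest symptoms-to-treatment route
     *  in a weighted care graph (weights might encode cost, risk, or time). */
    public class CarePathFinder {

        static double h(String node) { return 0.0; }  // placeholder admissible heuristic

        static List<String> findPath(Map<String, Map<String, Double>> graph,
                                     String start, String goal) {
            Map<String, Double> g = new HashMap<>();      // best known cost from start
            Map<String, String> cameFrom = new HashMap<>();
            g.put(start, 0.0);
            PriorityQueue<String> open = new PriorityQueue<>(
                    Comparator.comparingDouble((String n) -> g.get(n) + h(n)));
            open.add(start);
            while (!open.isEmpty()) {
                String current = open.poll();
                if (current.equals(goal)) break;
                for (Map.Entry<String, Double> e
                        : graph.getOrDefault(current, Map.of()).entrySet()) {
                    double tentative = g.get(current) + e.getValue();
                    if (tentative < g.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                        g.put(e.getKey(), tentative);
                        cameFrom.put(e.getKey(), current);
                        open.remove(e.getKey());  // re-queue with the improved priority
                        open.add(e.getKey());
                    }
                }
            }
            if (!g.containsKey(goal)) return List.of();   // unreachable
            LinkedList<String> path = new LinkedList<>();
            for (String n = goal; n != null; n = cameFrom.get(n)) path.addFirst(n);
            return path;
        }

        public static void main(String[] args) {
            Map<String, Map<String, Double>> graph = Map.of(
                    "symptom:chest pain", Map.of("test:ECG", 1.0, "test:stress test", 3.0),
                    "test:ECG", Map.of("dx:arrhythmia", 2.0),
                    "test:stress test", Map.of("dx:arrhythmia", 1.0),
                    "dx:arrhythmia", Map.of("treatment:beta blocker", 1.0));
            System.out.println(findPath(graph, "symptom:chest pain", "treatment:beta blocker"));
            // [symptom:chest pain, test:ECG, dx:arrhythmia, treatment:beta blocker]
        }
    }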
  • Got the new Rating App mostly done. Still need to look into inbound links
  • Updated the blacklists on everything

Phil 3.10.16

7:00 – 3:30 VTX

  • Today’s thought. Trustworthiness is a state that allows for betrayal.
  • Since it’s pledge week on WAMU, I was listening to KQED this morning, starting around 4:45 am. Somewhere around 5:30(?) they ran an environment segment that talked about computer-generated hypotheses. Trying to run that down with no luck.
  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.
    • End-user–based framework approaches use different methods to allow for the differences between individual end-users for adaptive, interactive, or personalized assessment and ranking of UGC. They utilize computational methods to personalize the ranking and assessment process or give an individual end-user the opportunity to interact with the system, explore content, personally define the expected value, and rank content in accordance with individual user requirements. These approaches can also be categorized in two main groups: human centered approaches, also referred to as interactive and adaptive approaches, and machine-centered approaches, also referred to as personalized approaches. The main difference between interactive and adaptive systems compared to personalized systems is that they do not explicitly or implicitly use users’ previous common actions and activities to assess and rank the content. However, they give users opportunities to interact with the system and explore the content space to find content suited to their requirements.
    • Looks like section 3.1 is the prior research part for the Pertinence Slider Concept.
    • Evaluating the algorithm reveals that enrichment of text (by calling out to search engines) outperforms other approaches that use simple syntactic conversion.

      • This seems to work, although the dependency on a Google black box is kind of scary. It really makes me wonder what the links created by a search of each sentence (where the subject is contained in the sentence?) would look like, and what we could learn… I took the On The Media retweet of a Google Trends tweet [“Basta” just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate](https://twitter.com/GoogleTrends/status/707756376072843268) and fed that into Google, which returned:
        4 results (0.51 seconds)
        Search Results
        Hillary Clinton said 'basta' and America went nuts | Sun ...
        national.suntimes.com/.../7/.../hillary-clinton-basta-cnn-univision-debate/
        9 hours ago - America couldn't get enough of a line Hillary Clinton dropped during Wednesday night's CNN/Univision debate after she ... "Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate.
        Hillary is Asked If Trump is 'Racist' at Debate, But It Gets ...
        https://www.ijreview.com/.../556789-hillary-was-asked-if-trump-was-raci...
        "Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate. — GoogleTrends (@GoogleTrends) March 10, 2016.
        Election 2016 | Reuters.com
        live.reuters.com/Event/Election_2016?Page=93
        Reuters
        Happening during tonight's #DemDebate, below are the first three tracks: ... "Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during # ...
        Maysoon Zayid (@maysoonzayid) | Twitter
        https://twitter.com/maysoonzayid?lang=en
        Maysoon Zayid added,. GoogleTrends @GoogleTrends. "Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate.
    • Found Facilitating Diverse Political Engagement with the Living Voters Guide, which I think is another study of the Seattle system presented at CSCW in Baltimore. The survey indicates that it has a good focus on bubbles.
    • Encouraging Reading of Diverse Political Viewpoints with a Browser Widget. Possibly more interesting are the papers that cite this…
  • Can you hear me now? Mitigating the echo chamber effect by source position indicators
  • Does offline political segregation affect the filter bubble? An empirical analysis of information diversity for Dutch and Turkish Twitter users
  • Events and controversies: Influences of a shocking news event on information seeking
  • Finished and committed the CrawlService changes. Jenkins wasn’t working for some reason, so we spun on that for a while. Tested and validated on the Integration system.
  • Worked some more on the Rating App. It compiles all the new persisted types in the new DB. Realized that the full website text should be in the result, not the rating.
  • Modified Margarita’s test file to use Theresa’s list of doctors.
  • Wrote up some notes on why a graph DB and UGC might be a really nice way to handle the best practices part of the task

Phil 3.9.16

7:00 – 2:30 VTX

  • Good discussion with Wayne yesterday about getting lost in a car with a passenger.
    • A trapper situated in an environment who may not know exactly where he is, but is not lost, is analogous to people exchanging information where the context is well understood but new information is being created in that context. Think of sports enthusiasts or researchers. More discussion will happen about the actions in the game than the stadium it was played in. Similarly, the focus of a research paper is the results, as opposed to where the authors appear in the document. Events can transpire to change that discussion (the power failure at the 2013 Super Bowl, for example), but even then most of the discussion involves how the blackout affected gameplay.
    • Trustworthy does not mean infallible. GPS gets things wrong, but we still depend on it. It has very high system trust. Interestingly, a Google Search of ‘GPS Conspiracy’ returns no hits about how GPS is being manipulated, while ‘Google Search Conspiracy’ returns quite a few appropriate hits.
    • GPS can also be considered a potential analogy to how our information gathering behaviors will evolve. Where current search engines index and rank existing content, a GPS synthesizes a dynamic route based on an ever-increasing set of constraints (road type, tolls, traffic, weather, etc.). Similarly, computational content generation (of which computational journalism is just one of the early trailblazers) will generate content that is appropriate for the current situation (“in 500 feet, turn right”). Imagine a system that can take a goal (“I want to go to the moon”) and create an assistant that constantly evaluates the information landscape to produce a near-optimal path to that goal, with turn-by-turn directions.
    • Studying how to create Trustworthy Anonymous Citizen Journalism is important then for:
      • Recognising individuals for who they are rather than who they say they are
      • Synthesizing trustworthy (quality?) content from the patterns of information as much as the content (Sweden = boring commute, Egypt = one lost, 2016 Republican Primaries = lost and frustrated direction asking, etc). The dog that doesn’t bark is important.
      • Determining the kind of user interfaces that create useful trustworthy information on the part of the citizen reporters and the interfaces and processes that organize, synthesise, curate and rank the content to the news consumer.
      • Providing a framework and perspective to provide insight into how computational content generation potentially reshapes Information Retrieval as it transitions to Information Goal Setting and Navigation.
  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.
  • Finish tests – Done. Found a bug!
  • Submit paperwork for Wall trip in Feb. Done
  • Get back to JPA
    • Set up new DB.
    • Did the initial populate. Now I need to add in all the new data bits.
  • Margarita sent over a test json file. Verified that it worked and gave her kudos.

Phil 3.8.16

7:00 – 3:00 VTX

  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web. Dense paper, slow going.
    • Ok, Figure 3 is terrible. Blue and slightly darker blue in an area chart? Sheesh.
    • Here’s a nice nugget though regarding detecting fake reviews using machine learning: For assessing spam product reviews, three types of features are used [Jindal and Liu 2008]: (1) review-centric features, which include rating- and text-based features; (2) reviewer-centric features, which include author-based features; and (3) product-centric features. The highest accuracy is achieved by using all features. However, it performs as efficiently without using rating-based features. Rating-based features are not effective factors for distinguishing spam and nonspam because ratings (feedback) can also be spammed [Jindal and Liu 2008]. With regard to deceptive product reviews, deceptive and truthful reviews vary concerning the complexity of vocabulary, personal and impersonal use of language, trademarks, and personal feelings. Nevertheless, linguistic features of a text are simply not enough to distinguish between false and truthful reviews. (Comparison of deceptive and truthful travel reviews). Here’s a later paper that cites the previous. Looks like some progress has been made: Using Supervised Learning to Classify Authentic and Fake Online Reviews
    • And here’s a good nugget on calculating credibility. Correlating with expert sources has been very important: Examining approaches for assessing credibility or reliability more closely indicates that most of the available approaches use supervised learning and are mainly based on external sources of ground truth [Castillo et al. 2011; Canini et al. 2011]—features such as author activities and history (e.g., a bio of an author), author network and structure, propagation (e.g., a resharing tree of a post and who shares), and topical-based affect source credibility [Castillo et al. 2011; Morris et al. 2012]. Castillo et al. [2011] and Morris et al. [2012] show that text- and content-based features are themselves not enough for this task. In addition, Castillo et al. [2011] indicate that authors’ features are by themselves inadequate. Moreover, conducting a study on explicit and implicit credibility judgments, Canini et al. [2011] find that the expertise factor has a strong impact on judging credibility, whereas social status has less impact. Based on these findings, it is suggested that to better convey credibility, improving the way in which social search results are displayed is required [Canini et al. 2011]. Morris et al. [2012] also suggest that information regarding credentials related to the author should be readily accessible (“accessible at a glance”) due to the fact that it is time consuming for a user to search for them. Such information includes factors related to consistency (e.g., the number of posts on a topic), ratings by other users (or resharing or number of mentions), and information related to an author’s personal characteristics (bio, location, number of connections).
    • On centrality in finding representative posts, from Beyond Trending Topics: Real-World Event Identification on Twitter: The problem is approached in two concrete steps: first by identifying each event and its associated tweets using a clustering technique that clusters together topically similar posts, and second, for each event cluster, posts are selected that best represent the event. Centrality-based techniques are used to identify relevant posts with high textual quality that are useful for people looking for information about the event. Quality refers to the textual quality of the messages—how well the text can be understood by any person. Of three centrality-based approaches (Centroid, LexRank [Radev 2004], and Degree), Centroid is found to be the preferred way to select tweets given a cluster of messages related to an event [Becker et al. 2012]. Furthermore, Becker et al. [2011a] investigate approaches for analyzing the stream of tweets to distinguish between relevant posts about real-world events and nonevent messages. First, they identify each event and its related tweets by using a clustering technique that clusters together topically similar tweets. Then, they compute a set of features for each cluster to help determine which clusters correspond to events and use these features to train a classifier to distinguish between event and nonevent clusters. (A minimal centroid-selection sketch follows below.)
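    • A minimal sketch of that centroid-based selection, under simple assumptions (raw term-frequency vectors and cosine similarity; the paper’s actual features may differ):

      import java.util.*;

      /** Pick the post closest to the term-frequency centroid of a cluster. */
      public class CentroidSelector {
          static Map<String, Double> tf(String post) {
              Map<String, Double> v = new HashMap<>();
              for (String t : post.toLowerCase().split("\\W+"))
                  if (!t.isEmpty()) v.merge(t, 1.0, Double::sum);
              return v;
          }

          static double cosine(Map<String, Double> a, Map<String, Double> b) {
              double dot = 0, na = 0, nb = 0;
              for (Map.Entry<String, Double> e : a.entrySet())
                  dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
              for (double x : a.values()) na += x * x;
              for (double x : b.values()) nb += x * x;
              return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
          }

          static String representative(List<String> cluster) {
              Map<String, Double> centroid = new HashMap<>();
              for (String post : cluster)  // average the TF vectors
                  tf(post).forEach((t, c) -> centroid.merge(t, c / cluster.size(), Double::sum));
              String best = null;
              double bestSim = -1;
              for (String post : cluster) {  // argmax cosine similarity to the centroid
                  double sim = cosine(tf(post), centroid);
                  if (sim > bestSim) { bestSim = sim; best = post; }
              }
              return best;
          }
      }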
  • Meeting with Wayne at 4:15
  • Crawl Service
    • Had the ‘&q=’ part in the wrong place
    • Was setting the key equal to the CSE in the payload, which caused a lot of errors. And it’s working now! Here’s the full payload:
      {
       "query": "phil+feldman+typescript+angular+oop",
       "engineId": "cx=017379340413921634422:swl1wknfxia",
       "keyId": "key=AIzaSyBCNVJb3v-FvfRbLDNcPX9hkF0TyMfhGNU",
       "searchUrl": "https://www.googleapis.com/customsearch/v1?",
       "requestId": "0101016604"
      }
    • Only the “query” field is required. There are hard-coded defaults for engineId, keyId and searchUrlPrefix
    • Ok, time for tests, but before I try them in the Crawl Service, I’m going to try out Mockito in a sandbox
    • Added mockito-core to the GoogleCSE2 sandbox. Starting on the documentation. Ok – that makes sense
    • Added SearchRequestTest to CrawlService
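    • A sketch of the kind of Mockito test this enables; the SearchClient interface here is invented for illustration and is not the actual CrawlService API:

      import static org.junit.Assert.assertEquals;
      import static org.mockito.Mockito.*;

      import org.junit.Test;

      /** Hypothetical Mockito sandbox test: stub a collaborator, exercise it,
       *  and verify the interaction. */
      public class SearchRequestSketchTest {
          interface SearchClient {
              String search(String query);
          }

          @Test
          public void returnsStubbedResultAndRecordsTheCall() {
              SearchClient client = mock(SearchClient.class);
              when(client.search("phil+feldman+typescript")).thenReturn("{\"items\":[]}");

              assertEquals("{\"items\":[]}", client.search("phil+feldman+typescript"));
              verify(client).search("phil+feldman+typescript");  // exactly one call happened
          }
      }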

Phil 3.7.16

VTX 8:00 – 5:00

  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web. Also, Wayne found a very interesting paper at CSCW: On the Wisdom of Experts vs. Crowds: Discovering Trustworthy Topical News in Microblogs. Looks like they are able to match against known experts. System trust plus expert trust? Will read in detail later.
  • I’ve been trying to simplify the concept of information bubbles and star patterns, particularly based on the Sweden and Gezi/Egypt papers, and I started thinking about getting lost in a car as a simple model. A car or other shared vehicle is interesting because it’s not possible to leave when it’s moving, and it’s rarely practical to leave anywhere other than the start and destination.
  • For case 1, imagine that you are in a car with a passenger, driving somewhere that you both know, like a commute to work. There is no discussion of the route unless something unusual happens. Both participants look out over the road and see things that they recognise, so the confidence that they are where they should be is high. The external world reinforces their internal model.
  • For case 2, imagine that two people are driving to a location where there is some knowledge of the area, but one person believes that they are lost and one person believes that they are not. My intuition is that this can lead to polarizing arguments, where each party points to things that they think they know and uses them to support their point.
  • In case 3, both people are lost. At this point, external information has to be trusted and used. They could ask for directions, get a map, etc. These sources have to be trusted, but they may not be trustworthy. Credibility cues help determine who gets asked. As a cyclist, I get asked for directions all the time, because people assume me to be local. I have also been the second or third person asked by someone who is lost. They are generally frustrated and anxious. And if I am in an area I know, and speak with authority, the relief I see is palpable.
  • Case 4 is a bit different and assumes the presence of an expert. It could be a GPS or a navigator, such as is used in motorsports like the WRC. Here, trust in the expert is very high. So much so that misplaced trust in GPS has led to death. In this case, the view out the window is less important than the expert. The tendency to follow and ignore the evidence is so high that the evidence has to pile up in some distinctive way to be acknowledged.
  • Case 5 is kind of the inverse of case 4. Imagine that there are two people in a vehicle who are trained in navigation, as opposed to knowing a route. I’m thinking of people who live in the wilderness, but there are also navigation games like rallies. In this case, the people are very grounded in their environment and never really lost, so I would expect their behavior to be different.
  • These five cases seem to me to contain the essence of the difference between information bubbles and star patterns. In a cursory look through Google Scholar, I haven’t seen much research into this. What I have found seems to be related to the field of Organizational Science. This is the best I’ve found so far:
  • Anyway, it seems possible to make some kind of simple multiplayer game that explores some of these concepts and would produce a very clean data set. Generalizations could definitely carry over to News, Politics, Strategy, etc.
  • Need to think about bias.
  • Starting on Crawl Service
    • Running the first gradle build clean in the command line. I’m going to see if this works without intellij first
    • Balaji said to set <serviceRegistry>none</serviceRegistry> in the src/main/resources crawlservice-config.xml, but it was already set.
    • Found the blacklist there too. Might keep it anyway. Or is it obsolete?
    • To execute: java -jar build/libs/crawlservice.war
    • Trying to talk to CrawlService. Working in Postman on http://localhost:8710/crawlservice/search
    • Changed the SearchRequest.java and CrawlRequest.java to be able to read in and store arguments
    • Had to drill into SearchQuery until I saw that SearchRequest is buried in there.
    • Trying to put together the uri in GoogleWebSearch.getUri to handle the SearchRequest.
    • A little worried about there not being a CrawlQuery
    • It builds but I’m afraid to run it until tomorrow.
  • Still hanging fire on updating the JPA on the new curation app.

Phil 3.4.16

VTX 7:00 – 5:00

  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web
    • Adding N. Diakopoulos and M. Naaman, Topicality, Time, and Sentiment in Online News Comments. Conference on Human Factors in Computing Systems (CHI) Works in Progress, May 2011. [PDF] Short! Yay!
    • Added Adaptive Faceted Ranking for Social Media Comments. I think it may touch on my idea of Pertinence ranking using Markov Chains.
  • Scanned Exploiting Social Context for Review Quality Prediction and realized that it’s got some very good hints for markers that can be used for machine learning on the doctor records (a sketch of the simpler features follows the table):
    Feature Name    Type        Feature Description
    NumToken        Text-Stat   Total number of tokens.
    NumSent         Text-Stat   Total number of sentences.
    UniqWordRatio   Text-Stat   Ratio of unique words.
    SentLen         Text-Stat   Average sentence length.
    CapRatio        Text-Stat   Ratio of capitalized sentences.
    POS:NN          Syntactic   Ratio of nouns.
    POS:ADJ         Syntactic   Ratio of adjectives.
    POS:COMP        Syntactic   Ratio of comparatives.
    POS:V           Syntactic   Ratio of verbs.
    POS:RB          Syntactic   Ratio of adverbs.
    POS:FW          Syntactic   Ratio of foreign words.
    POS:SYM         Syntactic   Ratio of symbols.
    POS:CD          Syntactic   Ratio of numbers.
    POS:PP          Syntactic   Ratio of punctuation symbols.
    KLall           Conformity  KL divergence D_KL(Tr||Ti).
    PosSEN          Sentiment   Ratio of positive sentiment words.
    NegSEN          Sentiment   Ratio of negative sentiment words.
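  • A minimal sketch of the Text-Stat features from the table, assuming naive whitespace/punctuation splitting; the POS, conformity, and sentiment features would need a real NLP toolkit (e.g., the StanfordNLP we already use):

    import java.util.*;

    /** Compute the simple surface-statistics features (NumToken, NumSent,
     *  UniqWordRatio, SentLen, CapRatio) for a piece of review text. */
    public class TextStatFeatures {
        public static Map<String, Double> extract(String text) {
            String[] sentences = text.split("(?<=[.!?])\\s+");
            String[] tokens = text.split("\\s+");
            Set<String> unique = new HashSet<>();
            for (String t : tokens) unique.add(t.toLowerCase());
            int capitalized = 0;
            for (String s : sentences)
                if (!s.isEmpty() && Character.isUpperCase(s.charAt(0))) capitalized++;

            Map<String, Double> f = new LinkedHashMap<>();
            f.put("NumToken", (double) tokens.length);
            f.put("NumSent", (double) sentences.length);
            f.put("UniqWordRatio", (double) unique.size() / tokens.length);
            f.put("SentLen", (double) tokens.length / sentences.length);  // avg tokens/sentence
            f.put("CapRatio", (double) capitalized / sentences.length);
            return f;
        }
    }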
  • This means I need to store the whole page in the rating app so that I can evaluate machine ratings after getting human ratings.
  • Finished the UI part of the display, now to change the DB back end. I’m going to start the DB over again since there is so much new stuff.
  • Cleaning up classes. Moved LoginDialog and CheckboxGroup to utils.
  • Meeting about the relative merits of StanfordNLP and Rosette. We’ll stick with Stanford for now. I have some questions about how Webhose.io will be handled, but Aaron thinks that it can be filtered in the TAS, with a query string preprocessor.

Phil 3.3.16

7:00 – 4:30 VTX

Phil 3.2.16

5:00-ish 4:00 – VTX

  • Call Charlestown
  • Meeting with Dr. Pan
    • The new ground truth framework looks good. Saving outbound and inbound links is also worth doing.
    • Beware of low-percentage patterns: finding the 1% answer is very hard for machine learning, while finding the 49% answer is much more tractable.
    • SVMs are probably a good way to start since they are resistant to overfitting
    • Multiple passes may be required to filter the data to get a meaningful result. Patterns like the .edu/.gov ratio may be very helpful
    • The subReddit Change My View is an interesting UGC site that should provide good examples of information networks on both sides of a controversial point, and a measure of success. It would certainly be interesting to do a link analysis.
  • Starting on A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web. If I’m right, I should have a Game Theory/Information Economics model to frame this. Here’s hoping.
    • As an aside, parsing my saved documents to get authors, general terms, and ACM Reference Format terms should be done to compare the produced networks. Looks like PDFBox should do the trick (see the sketch after this list).
    • Elaheh Momeni – Lots of stuff on UGC
      • Data Mining
      • Collective Intelligence
      • Machine Learning
      • User Generated Content Mining
      • Social Computing
    • Claire Cardie
      • argument mining and argument generation including the identification of supported vs. unsupported claims and opinions,
      • social-computational methods for improving communication and interactions in on-line settings,
      • NLP for e-rulemaking,
      • sentiment analysis: extraction and summarization of fine-grained opinions in text,
      • discourse-aware methods for opinion and argument extraction,
      • deception detection in on-line reviews,
      • noun phrase coreference resolution.
    • Nick Diakopoulos
      • Research in computational and data journalism with an emphasis on algorithmic accountability, narrative data visualization, and social computing in the news.
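  • A rough PDFBox sketch for the parsing idea above (PDFBox 2.x API; the section-grabbing is a crude placeholder, and paper.pdf is a stand-in filename):

    import java.io.File;

    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    /** Pull author metadata and peek at the "General Terms" section of a paper. */
    public class PaperMetadataExtractor {
        public static void main(String[] args) throws Exception {
            try (PDDocument doc = PDDocument.load(new File("paper.pdf"))) {
                // Embedded metadata, when the publisher filled it in
                System.out.println("Author: " + doc.getDocumentInformation().getAuthor());

                String text = new PDFTextStripper().getText(doc);
                int idx = text.indexOf("General Terms");
                if (idx >= 0)  // naive: just show the text right after the heading
                    System.out.println(text.substring(idx, Math.min(idx + 200, text.length())));
            }
        }
    }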
  • New Weapon in Day Laborers’ Fight Against Wage Theft: A Smartphone App – NYTimes. Short documentary on YouTube. Sol Aramendi is the author?
  • Spent time when I should be sleeping thinking about rating webpages. Rather than the current single list, I think at least four categories are needed:
    • Accessible yes/no (404, etc)
    • Match – did the person show up yes/no/possible-can’t tell
    • Target Characterization
      • Positive – gave to charity, published a paper
      • Neutral – phone book listing
      • Negative – conviction, confession
    • Source type
      • Official Document
      • Home Page
      • Microblog
      • Blog
      • News organization
      • Federal Government
      • State Government
      • Commercial Entity – Rating site, etc
      • Non-commercial Entity – Nonprofit, clubs, interest group
      • Educational – yearbook, program, course listing
      • Machine-generated for unclear purpose
      • Spam
    • Content Characterization (can be multiple)
      • Medical
      • Legal
      • Commercial
      • Official
      • Marketing
      • Other
      • Spam
    • Quality Characterization
      • Low – confusing, conflicting, or unrelated information
      • Minimal – some useful information (Machine harvested from better sources)
      • High – clear, providing high quality information
    • Source Characterization
      • Very trustworthy – I’d give them my SSN
      • Trustworthy – I’d use a credit card here
      • Credible – I’d use this site to support an argument
      • Neutral – Not sure, but wouldn’t avoid
      • Not Credible – Not rooted in things that I believe/trust
      • Distrustworthy – I’m pretty sure this site is misinformation
      • Very Distrustworthy – Conspiracy theories, Lizardmen, etc
    • Relevant Text – In addition, I think we need a text area where the user can paste text from the webpage that contains the match in context, or something that exemplifies the source characterization
    • Notes – To cover anything that’s not covered above
  • So now Gregg is handling Crawl Service file generation?
  • Discussion with Katy and Jeremy about the list above?
  • Pondering how to adjust the ratingObject. Everything is a string except for content characterization, which can have multiple values. I could do a bitfield or a separate table. Leaning towards the bitfield (sketch below).
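  • A minimal sketch of that bitfield, assuming one int column in the rating row with one bit per content category from the list above (class and constant names invented):

    /** Bit flags for the multi-valued content characterization. */
    public class ContentCharacterization {
        public static final int MEDICAL    = 1 << 0;
        public static final int LEGAL      = 1 << 1;
        public static final int COMMERCIAL = 1 << 2;
        public static final int OFFICIAL   = 1 << 3;
        public static final int MARKETING  = 1 << 4;
        public static final int OTHER      = 1 << 5;
        public static final int SPAM       = 1 << 6;

        public static boolean has(int field, int flag) {
            return (field & flag) != 0;
        }

        public static void main(String[] args) {
            int field = MEDICAL | COMMERCIAL;         // store this one int in the DB
            System.out.println(has(field, MEDICAL));  // true
            System.out.println(has(field, SPAM));     // false
        }
    }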

Phil 3.1.16

7:00 – 4:30 VTX