Phil 2.29.16

7:00 – 3:00 VTX

  • Seminar today, sent Aaron a reminder.
    • Some discussion about my publication quantity. Amy suggests 8 papers as the baseline for credibility, so here are some preliminary thoughts about what could come out of my work:
      • PageRank-style document-return sorting by pertinence
      • User Interfaces for trustworthy input
      • Rating the raters / harnessing the Troll
      • Trustworthiness inference using network shape
      • Adjusting relevance through GUI pertinence
      • Something about ranking of credibility cues – Video, photos, physical presence, etc.
      • Something about patterns of posting indicating the need for news. Sweden vs. Gezi. And how this can be indicative of an emerging crisis-informatics need
      • Something about fragment synthesis across disciplines and being able to use it to ‘cross reference’ information?
      • Fragment synthesis vs. community fragmentation.
    • 2013 SenseCam paper
    • Narrative Clip
  • Continuing to read “Incentivizing High-quality User-Generated Content.”
    • Looking at the authors
    • The proportional mechanism therefore improves upon the baseline mechanism by disincentivizing q = 0, i.e., it eliminates the worst reviews. Ideally, we would like to be able to drive the equilibrium qualities to 1 in the limit as the number of viewers, M, diverges to infinity; however, as we saw above, this cannot be achieved with the proportional mechanism.
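A toy numerical sketch of why the proportional mechanism eliminates q = 0. The mechanism names, the quality values, and the cost-free payoffs here are my simplification for intuition, not the paper's full model:

```python
# Hypothetical illustration: attention earned by each contribution
# under a baseline (equal-split) mechanism vs. a proportional one.
# M, the quality values, and the absence of a cost term are made up.

def baseline_attention(qualities, M):
    """Baseline: M viewers split equally, regardless of quality."""
    n = len(qualities)
    return [M / n for _ in qualities]

def proportional_attention(qualities, M):
    """Proportional: attention allocated in proportion to quality."""
    total = sum(qualities)
    if total == 0:
        return [0.0 for _ in qualities]
    return [M * q / total for q in qualities]

qualities = [0.0, 0.5, 1.0]  # one zero-quality contribution
M = 100                      # number of viewers

print(baseline_attention(qualities, M))      # q = 0 still earns a share
print(proportional_attention(qualities, M))  # q = 0 earns nothing
```

Under the equal split, contributing nothing still earns attention; under the proportional rule it earns zero, which matches the quoted result — though, as the excerpt notes, this alone cannot drive equilibrium quality to 1 as M diverges.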
    • This reflects my intuition. The lower the quality of the rating, the worse the proportional rating system performs, and the lower the quality bar for the contributor. The three places I can think of offhand that have high-quality UGC (Idea Channel, StackOverflow, and Wikipedia) all have people rating the data (contextually!!!) rather than a simple up/downvote.
      • Idea Channel – The main content creators read the comments and incorporate the best in the subsequent episode.
      • StackOverflow – Has become a place to show off knowledge, there are community mechanisms of enforcement, and the number of answers is low enough that it’s possible to look over all of them.
      • Others that might be worth thinking about:
        • Quora? Seems to be an odd mix of questions. Some just seem lazy (how do I become successful?) or very open-ended (what kind of guy is Barack Obama?). The quality of the writing is usually good, but I don’t wind up using it much. So why is that?
        • Reddit. So ugly that I really don’t like using it. Is there a System Quality/Attractiveness as well as System Trust?
        • Slashdot. Good headline service, but low information in the comments. Occasionally something insightful, but often it seems like rehearsed talking points.

    • So the better the raters, the better the quality. How can the System evaluate rater quality? Link analysis? Pertinence selection? And if we know a rater is low-quality, can we use that as a measure in its own right?
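One minimal sketch of “rating the raters”: weight each rater by their agreement with the weighted consensus, then feed the weights back in and iterate. The data, the deviation measure, and the 1/(1 + dev) weighting rule are all placeholder assumptions, not a worked-out design:

```python
# Hypothetical example: three raters score three items; a rater who
# consistently disagrees with the weighted consensus loses weight.

ratings = {  # rater -> {item: score}
    "alice": {"a": 4, "b": 5, "c": 1},
    "bob":   {"a": 4, "b": 4, "c": 2},
    "troll": {"a": 1, "b": 1, "c": 5},
}

weights = {r: 1.0 for r in ratings}  # start by trusting everyone equally

for _ in range(5):
    # Weighted consensus score per item
    consensus = {}
    for item in ("a", "b", "c"):
        num = sum(weights[r] * s[item] for r, s in ratings.items())
        den = sum(weights[r] for r in ratings)
        consensus[item] = num / den
    # Rater weight shrinks with mean absolute deviation from consensus
    for r, s in ratings.items():
        dev = sum(abs(s[i] - consensus[i]) for i in s) / len(s)
        weights[r] = 1.0 / (1.0 + dev)

print(weights)  # the dissenting rater ends up with the lowest weight
```

The low final weight is itself usable as a signal, per the note above: a known low-quality rater is information in its own right, not just noise to discard.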
  • Trying to test the redundant web page filter, but the URLs for most identical pages are actually slightly different.
  • I think tomorrow I might parse the URL or look at page content.
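A first pass at the URL-parsing idea, assuming the slight differences are the usual suspects (scheme, host case, trailing slash, tracking query parameters). The stripping rules here are guesses to be tuned against the actual crawl data:

```python
# Sketch: treat two URLs as the same page if they match after
# normalization. The set of "tracking" parameters is a guess.

from urllib.parse import urlsplit, parse_qsl, urlencode

TRACKING = {"utm_source", "utm_medium", "utm_campaign", "ref"}

def normalize(url):
    parts = urlsplit(url)
    # Drop tracking params, sort the rest so order doesn't matter
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING
    ))
    # Ignore scheme; lowercase host; strip trailing slash
    path = parts.path.rstrip("/") or "/"
    return (parts.netloc.lower(), path, query)

u1 = "http://example.com/story/?utm_source=feed"
u2 = "https://EXAMPLE.com/story"
print(normalize(u1) == normalize(u2))  # True
```

If this still leaves too many near-duplicates, falling back to page content (e.g., hashing the extracted text) is the next step, as noted above.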