7:00 – 3:00 VTX
- Seminar today, sent Aaron a reminder.
- Some discussion about publication quantity. Amy suggests eight papers as the baseline for credibility, so here are some preliminary thoughts about what could come out of my work:
- PageRank-style document-return sorting by pertinence
- User Interfaces for trustworthy input
- Rating the raters / harnessing the Troll
- Trustworthiness inference using network shape
- Adjusting relevance through GUI pertinence
- Something about ranking of credibility cues – Video, photos, physical presence, etc.
- Something about the patterns of posting indicating the need for news (Sweden vs. Gezi), and how this can indicate an emerging crisis-informatics need
- Something about fragment synthesis across disciplines and being able to use it to ‘cross reference’ information?
- Fragment synthesis vs. community fragmentation.
- 2013 SenseCam paper
- Narrative Clip
- Continuing with "Incentivizing High-quality User-Generated Content."
- Looking at the authors
- Arpita Ghosh. Lots of good stuff. Revisit later
- Social Computing and User-generated Content: A Game-Theoretic Approach. Arpita Ghosh. SigEcom Exchanges, Vol 11.2, December 2012.
- Incentives in Human Computation. HCOMP, November 2013.
- Truthful Assignment without Money. Shaddin Dughmi, Arpita Ghosh. Proc. 11th ACM Conference on Electronic Commerce (EC), 2010.
- R. Preston McAfee currently works as chief economist at Microsoft. Previously, he was an economist at Google. Before that he was a Vice President and Research Fellow at Yahoo! Research where he led the Microeconomics and Social Systems group. Also has the ugliest home page I’ve seen since the late ’90s.
- The Wisdom of Smaller, Smarter Crowds. EC 2014: Proceedings of the 15th ACM Conference on Economics and Computation, 2014 (with Dan Goldstein and Sid Suri).
- The proportional mechanism therefore improves upon the baseline mechanism by disincentivizing q = 0, i.e., it eliminates the worst reviews. Ideally, we would like to be able to drive the equilibrium qualities to 1 in the limit as the number of viewers, M, diverges to infinity; however, as we saw above, this cannot be achieved with the proportional mechanism.
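- As a toy illustration of why the proportional mechanism disincentivizes q = 0 (a sketch under my own assumptions — the function names and the attention model are mine, not the paper's notation beyond q and M):

```python
# Compare how two attention-allocation rules reward a zero-quality contribution.
# Hypothetical sketch: "attention" stands in for whatever payoff viewers confer.

def baseline_attention(qualities, M):
    """Baseline: M viewers split evenly over all contributions, regardless of quality."""
    n = len(qualities)
    return [M / n for _ in qualities]

def proportional_attention(qualities, M):
    """Proportional: each contribution's share of the M viewers is proportional to its quality q."""
    total = sum(qualities)
    if total == 0:
        return [0.0 for _ in qualities]
    return [M * q / total for q in qualities]

qualities = [0.0, 0.5, 0.9]  # one worthless review, two useful ones
M = 100

print(baseline_attention(qualities, M))      # q = 0 still collects a full share of viewers
print(proportional_attention(qualities, M))  # q = 0 earns nothing, so the worst reviews vanish
```

Under the baseline rule a q = 0 contribution still collects viewers, so contributing garbage is free; under the proportional rule it earns exactly zero, which matches the paper's claim that the mechanism eliminates the worst reviews without driving quality to 1.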
- This reflects my intuition. The lower the quality of the rating, the worse the proportional rating system performs, and the lower the bar for contributor quality. The three places I can think of offhand that have high-quality UGC (Idea Channel, StackOverflow, and Wikipedia) all have people rating the data (contextually!!!) rather than using a simple up/downvote.
    - Idea Channel – The main content creators read the comments and incorporate the best into the subsequent episode.
    - StackOverflow – Has become a place to show off knowledge; there are community mechanisms of enforcement, and the number of answers is low enough that it's possible to look over all of them.
- Others that might be worth thinking about:
    - Quora? Seems to be an odd mix of questions. Some just seem lazy (how do I become successful?) or very open-ended (what kind of guy is Barack Obama?). The quality of the writing is usually good, but I don't wind up using it much. So why is that?
    - Reddit – So ugly that I really don't like using it. Is there a System Quality/Attractiveness dimension as well as System Trust?
    - Slashdot – Good headline service, but low information in the comments. Occasionally something insightful, but often it seems like rehearsed talking points.
- So the better the raters, the better the quality. How can the system evaluate rater quality? Link analysis? Pertinence selection? And if we know a rater is low-quality, can we use that as a signal in its own right?
- Trying to test the redundant web page filter, but the URLs for most identical pages are actually slightly different.
- I think tomorrow I might parse the URL or look at page content.
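- A possible first pass at the URL-parsing route — normalize URLs before comparing, so near-identical URLs (tracking parameters, trailing slashes, case differences in the host) collapse to one key. A sketch only; which query parameters are actually safe to drop depends on the sites being crawled, and the `TRACKING_PARAMS` list is my guess:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Query parameters that usually don't change page content (assumed list).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref", "fbclid"}

def normalize_url(url):
    """Reduce a URL to a canonical form for duplicate detection."""
    parts = urlsplit(url.strip())
    host = parts.netloc.lower()                 # host is case-insensitive
    path = parts.path.rstrip("/") or "/"        # /story/ and /story are the same page
    # keep only non-tracking query parameters, in sorted order
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS
    ))
    base = f"{parts.scheme.lower()}://{host}{path}"
    return base + (f"?{query}" if query else "")

print(normalize_url("HTTP://Example.com/story/?utm_source=feed"))
# -> http://example.com/story
print(normalize_url("http://example.com/story"))
# -> http://example.com/story
```

If this doesn't catch enough duplicates, the fallback is the other idea above: hash the page content (or a shingled version of it) and compare that instead.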