Phil 3.1.18

7:00 – 4:30 ASRC MKT

  • Anonymize (done) and submit paper – done!
  • Finish T’s timeline approach? Finished my version. I think I like it.
  • This may be important:
    • We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress. (11:33 AM – 1 Mar 2018, from San Francisco, CA)
      Our friends at @cortico and @socialmachines introduced us to the concept of measuring conversational health. They came up with four indicators: shared attention, shared reality, variety of opinion, and receptivity. Read about their work here:
    • We simply can’t and don’t want to do this alone. So we’re seeking help by opening up an RFP process to cast the widest net possible for great ideas and implementations. This will take time, and we’re committed to providing all the necessary resources. RFP:


  • Interactive topic hierarchy revision for exploring a collection of online conversations
    • In the last decade, there has been an exponential growth of asynchronous online conversations (e.g. blogs), thanks to the rise of social media. Analyzing and gaining insights from such discussions can be quite challenging for a user, especially when the user deals with hundreds of comments that are scattered around multiple different conversations. A promising solution to this problem is to automatically mine the major topics from conversations and organize them into a hierarchical structure. However, the resultant topic hierarchy can be noisy and/or it may not match the user’s current information needs. To address this problem, we introduce a novel human-in-the-loop approach that allows the user to revise the topic hierarchy based on her feedback. We incorporate this approach within a visual text analytics system that helps users in analyzing and getting insights from conversations by exploring and revising the topic hierarchy. We evaluated the resulting system with real users in a lab-based study. The results from the user study, when compared to its counterpart that does not support interactive revisions of a hierarchical topic model, provide empirical evidence of the potential utility of our system in terms of both performance and subjective measures. Finally, we summarize generalizable lessons for introducing human-in-the-loop computation within a visual text analytics system.
  • Understanding the Promise and Limits of Automated Fact-Checking
    • The furor over so-called ‘fake news’ has exacerbated long-standing concerns about political lying and online rumors in a fragmented media environment, drawing attention to the potential of various automated fact-checking (AFC) technologies to combat online misinformation. This factsheet gives an overview of current efforts to automatically police false claims and misleading content online. Based on a review of recent research and interviews with both fact-checkers and computer scientists working in this area, we find that:
      • Much of the terrain covered by human fact-checkers requires a kind of judgement and sensitivity to context that remains far out of reach for fully automated verification. 
      • Despite progress in automatic verification of a narrow range of simple factual claims, AFC systems will require human supervision for the foreseeable future.
      • The promise of AFC technologies for now lies in tools to assist fact-checkers to identify and investigate claims, and to deliver their conclusions, as effectively as possible.
  • More BIC
    • Now it is the case, and increasingly widely recognized to be, that in games in general there’s no way players can rationally deliberate to a Nash equilibrium. Rather, classical canons of rationality do not in general support playing in Nash equilibria. So it looks as though shared intentions cannot, in the general run of games, by classical canons, be rationally formed! And that means in the general run of life as well. This is highly paradoxical if you think that rational people can have shared intentions. The paradox is not resolved by the thought that when they do, the context is not a game: any situation in which people have to make the sorts of decisions that issue in shared intentions must be a game, which is, after all, just a situation in which combinations of actions matter to the combining parties. (pg 139)
    • Turn to the idea that a joint intention to do (x,y) is rationally produced in 1 and 2 by common knowledge of two conditional intentions: P1 has the intention expressed by ‘I’ll do x if and only if she does y’, and P2 the counterpart one. Clearly P1 doesn’t have the intention to do x if and only if P2 in fact does y whether or not P1 believes P2 will do y; the right condition must be along the lines of:
      (C1) P1 intends to do x if and only if she believes P2 will do y. (pg 139)

      • So this is in belief space, and belief is based on awareness and trust
    • There are two obstacles to showing this, one superable, the other not, I think. First, there are two Nash equilibria, and nothing in the setup to suggest that some standard refinement (strengthening) of the Nash equilibrium condition will eliminate one. However, I suspect that my description of the situation could be refined without ‘changing the subject’. Perhaps the conditional intention C1 should really be ‘I’ll do x if and only if she’ll do y, and that’s what I would like best’. For example, if x and y are the two obligations in a contract being discussed, it is natural to suppose that P1 thinks that both signing would be better than neither signing. If we accept this gloss then the payoff structure becomes a Stag Hunt – Hi-Lo if both are worse off out of equilibrium than in the poor equilibrium (x', y'). To help the cause of rationally deriving the joint intention (x,y), assume the Hi-Lo case. What are the prospects now? As I have shown in chapter 1, there is no chance of deriving (x,y) by the classical canons, and the only (so far proposed) way of doing so is by team reasoning. (pg 140)
    • The nature of team reasoning, and of the conditions under which it is likely to be primed in individual agents, has a consequence that gives further support to this claim. This is that joint intentions arrived at by the route of team reasoning involve, in the individual agents, a ‘sense of collectivity’. The nature of team reasoning has this effect, because the team reasoner asks herself not ‘What should I do?’ but ‘What should we do?’ So, to team-reason, you must already be in a frame in which first-person plural concepts are activated. The priming conditions for team reasoning have this effect because, as we shall see later in this chapter, team reasoning, for a shared objective, is likely to arise spontaneously in an individual who is in the psychological state of group-identifying with the set of interdependent actors; and to self-identify as a member of a group essentially involves a sense of collectivity. (pg 141)
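    • The Hi-Lo point is easy to check mechanically. A minimal sketch (my own illustration, not from the book; the payoff numbers are invented): both (hi, hi) and (lo, lo) satisfy the Nash condition, so best-response reasoning alone cannot select between them, while team reasoning’s “What should we do?” picks the joint-payoff maximizer.

```python
# Hi-Lo game: two pure-strategy Nash equilibria, one of which is
# strictly better for both players. Payoff numbers are invented.
# payoffs[(a1, a2)] = (payoff to P1, payoff to P2)
payoffs = {
    ("hi", "hi"): (2, 2),
    ("hi", "lo"): (0, 0),
    ("lo", "hi"): (0, 0),
    ("lo", "lo"): (1, 1),
}
actions = ["hi", "lo"]

def is_nash(a1, a2):
    """(a1, a2) is a Nash equilibrium if neither player gains by deviating alone."""
    u1, u2 = payoffs[(a1, a2)]
    best1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
    best2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
    return best1 and best2

# Classical best-response reasoning: two equilibria, no way to choose.
nash = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
print(nash)  # [('hi', 'hi'), ('lo', 'lo')]

# Team reasoning: maximize the joint payoff instead of the individual one.
team = max(payoffs, key=lambda pair: sum(payoffs[pair]))
print(team)  # ('hi', 'hi')
```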
  • Starting on ONR white paper – first pass banged together
    • Need to add figures and references
  • Discovered pandoc, which converts nicely between many formats, including LaTeX and Word. The command that matters is:
    pandoc -s foo.tex -o foo.docx
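    A few related invocations that should work the same way (a sketch from pandoc’s standard flags – -s, -o, -f, -t – worth double-checking against `pandoc --help` locally):

```shell
# Reverse direction: Word back to LaTeX (-s = produce a standalone document)
pandoc -s foo.docx -o foo.tex

# Markdown notes to Word; formats are inferred from the file extensions
pandoc -s notes.md -o notes.docx

# Be explicit with -f (from) / -t (to) when an extension is ambiguous
pandoc -f latex -t docx -s foo.tex -o foo.docx
```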