Phil 3.15.16

7:00 – 4:00

  • Algorithm of Discovery – New Ideas From Computers
    • Jantzen, B. “Discovery without a ‘logic’ would be a miracle,” Synthese (forthcoming). (preprint)
    • From Jantzen’s current work:
      • My current work is focused on a suite of interrelated questions about natural kinds and the logic of discovery. I am attempting to test ideas about the logic of discovery by building algorithms that carry out automated scientific discovery. In particular, I’m interested in algorithms capable of generating novel ontologies or, put less grandly, novel sets of variables that may cross-cut those provided as input. Working drafts are available for some components and derivatives of this work (I’ll make more available soon). My paper arguing, contrary to received wisdom, that there must exist a logic of discovery can be found here. With regard to natural kinds, a paper in which I apply what I call the ‘dynamical kinds’ approach to the problem of levels of selection can be found here. A talk I gave on the same material at the International Conference on Evolutionary Patterns in Lisbon can be found here.
  • The Google UX Van came by yesterday, and they asked what I would like from Google. I realized that an ad-free subscription would be worth some price. Interestingly, that’s kind of available through Custom Search. It would probably run $100 – $250/year doing it this way. And I’ve already got my keys, and you can point the omnibox to anything. Have to think about this…
  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.
    • Presenting diverse political opinions: how and how much. From the abstract: “We find individual differences: some people are diversity-seeking while others are challenge-averse.” That sounds a lot like Star-and-Bubble to me!
    • Finished! Long but good. Something like 65 quotes.
  • Starting Presenting diverse political opinions: how and how much.
  • I’m beginning to wonder if there is a way to have an interface that drives users into the confirming or exploratory camps. Basically, see if it can be used to present cleaner data. (see highlighted below)


  • Knowledge Graphs versus Hierarchies: An Analysis of User Behaviours and Perspectives in Information Seeking
    • Lookup, learn, and investigate; the last two are exploratory.
    • Knowledge Graph built from semantic relationships
    • Watched behaviors based on these two representations (hierarchy vs. graph).
    • Context on the left, display on the right.
    • Users interacted with graph structure more, and hierarchies sent users to the underlying documents.
  • Exploring the Use of Query Auto Completion: Search Behavior and Query Entry Profiles
    • Does pulling from the QAC list imply looking for confirmation?
    • Is this the kind of key that I was looking for above?
    • As an aside, there needs to be a ‘benchmark query + SERP’ that allows for monitoring Google for changes. Watching the watchers:
      • What would the queries be?
      • How are the results evaluated?
      • What about topicality WRT queries? Should some be topical and others ‘classic’?
      • SERP vs QAC storage and evaluation?
      • Other search engines to watch (Google, Bing, DuckDuckGo…)
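As a sketch of what the SERP half of that benchmark might store and compare (the `serp_fingerprint`/`diff_serps` names and the result-dict shape are my own assumptions, not from any paper or API):

```python
import hashlib
import json

def serp_fingerprint(results):
    """Hash the ordered (title, url) pairs of a SERP snapshot so two
    pulls of the same benchmark query can be compared cheaply."""
    canonical = json.dumps([(r["title"], r["url"]) for r in results])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def diff_serps(old, new):
    """Report URLs that appeared, disappeared, or changed rank
    between two snapshots of the same query."""
    old_ranks = {r["url"]: i for i, r in enumerate(old)}
    new_ranks = {r["url"]: i for i, r in enumerate(new)}
    return {
        "added": [u for u in new_ranks if u not in old_ranks],
        "dropped": [u for u in old_ranks if u not in new_ranks],
        "moved": [u for u in new_ranks
                  if u in old_ranks and new_ranks[u] != old_ranks[u]],
    }
```

Running the same benchmark queries daily and diffing the snapshots would give a crude change-detector across engines; how to score the *size* of a change is the open question above.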
  • What Affects Word Changes in Query Reformulation During a Task-based Search Session?
    • Query vs. SERP using text analytics
    • Reusing a word in a search is almost always a return from a subtopic to a main theme.
      • So if the subject of a query is a more specific version of the subject of the previous query, we can get some interesting insights into how the user is approaching the problem they are trying to solve.
    • Also, via one of the authors (Cathy Smith): Helen Nissenbaum on legal definitions of trust, etc.
  • Playing Your Cards Right: The Effect of Entity Cards on Search Behaviour and Workload
    • Non-linear results page???
    • HIT to find information on Axl Rose. Turk & CrowdFlower.
    • Bing API
    • Marked Relevant or not
    • Used the Wikipedia disambiguation pages to find ambiguous terms. Nice…
    • Arbitrary insertion of non-relevant topics that were lexically similar.
    • Attention-check questions to mark credible responses.
    • MANOVA, ANOVA, with Bonferroni
    • Query reformulation happens more quickly when the card is off-topic.
    • No significance for card coherence. Habituation?
    • Diverse cards have photos, which are credibility cues. If the photos are right, that’s reinforcing. If the photos are wrong, then that’s a very visible warning that the results aren’t credible. Which also makes me wonder how photos affect the psychology of the users. And system trust.
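As a reminder to myself of what the Bonferroni correction in that analysis does (a generic stdlib-only sketch; the paper presumably used a stats package, and the numbers below are made up for illustration):

```python
def bonferroni(p_values, alpha=0.05):
    """Control the family-wise error rate across m comparisons by
    multiplying each p-value by m (capped at 1.0), then testing the
    adjusted values against alpha."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    rejected = [p_adj < alpha for p_adj in adjusted]
    return adjusted, rejected
```

With three pairwise comparisons, a raw p of 0.04 becomes 0.12 and no longer counts as significant, which is why the post-hoc tests in papers like this one get stricter as the number of card conditions grows.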
  • Impacts of Time Constraints and System Delays on User Experience
