Monthly Archives: September 2016

Phil 9.9.16

7:00 – 5:00  ASRC

  • Finished section 3.14
  • Back to reading Data Mining. Currently on Chapter 3. Done!
  • Chapter 4.
  • Discussion with Aaron, then Bob about sprint-ish planning.

Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration

  • Jonathan D. Cohen
  • Samuel M. McClure
  • Angela J. Yu
  • …often our decisions depend on a higher level choice: whether to exploit well known but possibly suboptimal alternatives or to explore risky but potentially more profitable ones. How adaptive agents choose between exploitation and exploration remains an important and open question that has received relatively limited attention in the behavioural and brain sciences. The choice could depend on a number of factors, including the familiarity of the environment, how quickly the environment is likely to change and the relative value of exploiting known sources of reward versus the cost of reducing uncertainty through exploration.
  • The need to balance exploitation with exploration is confronted at all levels of behaviour and time-scales of decision making from deciding what to do next in the day to planning a career path.
  • The significance of Gittins’ contribution is that it reduced the decision problem to computing and comparing these scalar indices. In practice, computing the Gittins index is not tractable for many problems for which it is known to be optimal. However, for some limited problems, explicit solutions have been found. For instance, the Gittins index has been computed for certain two armed bandit problems (in which the agent chooses between two options with independent probabilities of generating a reward), and compared to the foraging behaviour of birds under comparable circumstances; the birds were found to behave approximately optimally
  • Perhaps, the most important exception to Gittins’ assumptions is that real-world environments are typically non-stationary; i.e. they change with time. To understand how organisms manage the balance between exploration and exploitation in non-stationary environments, investigators have begun to study how organisms adapt their behaviour in response to the experimentally induced changes in reward contingencies. Several studies have now shown that both humans and other animals dynamically update their estimates of rewards associated with specific courses of action, and abandon actions that are deemed to be diminishing in value in search of others that may be more rewarding
  • At the same time, there is also longstanding evidence that humans sometimes exhibit an opposing tendency. When reward diminishes (e.g. following an error in performance), subjects often try harder at what they have been doing rather than less (e.g. Rabbitt 1966; Laming 1979; Gratton et al. 1992).
  • The balance between exploration and exploitation also seems to be sensitive to time horizons. Humans show a greater tendency to explore when there is more time left in a task, presumably because this allows them sufficient time later to enjoy the fruits of those explorations (Carstensen et al. 1999). – is this related to (lack of) stress? Something about cognitive bandwidth?
  • Bandit problems are well suited for studying the tension between exploitation and exploration since they offer a direct trade-off between exploiting a known source of reward (continuing to play one arm of the bandit) and exploring the environment (trying other arms) to acquire information about other sources of reward
  • The investigators found that the time at which birds stopped exploring (operationalized as the point at which they stayed at one feeding post) closely approximated that predicted by the optimal solution. Despite their findings, Krebs et al. (1978) recognized that it was highly unlikely that their birds were carrying out the complex calculations required by the Gittins index. Rather, they suggested that the birds were using simple behavioural heuristics that produce exploration times that qualitatively approximate the optimal solution – this might be good for the modelling section.
  • Nevertheless, to our knowledge, the Daw et al. (2006) study was the first to address formally the question of how subjects weigh exploration against exploitation in a non-stationary, but experimentally controlled environment. It also produced some interesting neurobiological findings. Their subjects performed the n-armed bandit task while being scanned using functional magnetic resonance imaging (fMRI). Among the observations reported was task-related activity in two sets of regions of prefrontal cortex (PFC). One set of regions was in ventromedial PFC and was associated with both the magnitude of reward associated with a choice, and that predicted by their computational model of the task (using the softmax decision rule). This area has been consistently associated with the encoding of reward value across a variety of task domains – biological basis for different behaviors (a minimal softmax sketch follows these notes)
  • Yu & Dayan (2005) proposed that a critical function of two important neuromodulators—acetylcholine (ACh) and norepinephrine (NE)—may be to signal expected and unexpected sources of uncertainty. While the model they developed for this was not intended to address the trade-off between exploitation and exploration, the distinction between expected and unexpected uncertainty is likely to be an important factor in regulating this trade-off. For example, the detection of unexpected uncertainty can be an important signal of the need to promote exploration.
  • …the distinction between expected and unexpected forms of uncertainty may be an important element in choosing between exploitation versus exploration. As long as prediction errors can be accounted for in terms of expected uncertainty—that is, the amount that we expect a given outcome to vary—then all other things being equal (e.g. ignoring potential non-stationarities in the environment), we should persist in our current behaviour (exploit). However, if errors in prediction begin to exceed the degree expected—i.e. unexpected uncertainty mounts—then we should revise our strategy and consider alternatives (explore).
  • Yu & Dayan (2005) proposed that ACh levels are used to signal expected uncertainty, and NE to signal unexpected uncertainty. They describe a computationally tractable algorithm by which these maybe estimated that approximates the Bayesian optimal computation of those estimates. Furthermore, they proposed how these estimates, reflected by NE and ACh levels, could be used to determine when to revise expectations

Phil 9.8.16

7:00 – 4:00 ASRC

  • Shimei and Wayne have responded to the Doodle that I sent out last night. No reading as a result.
  • Need to write an abstract!
  • Working through the TODOs
  • Lunchtime ride thoughts
    • Social Trust is the prisoner’s dilemma. It depends on negotiation. The natural communication is stories. Behaviors are dominance, submission, rejection, etc…  God of stories
    • System Trust is the multi-armed bandit problem. It depends on navigation. The natural communication is diagrams and maps. Behaviors are explore/exploit
    • Collection Trust is about storage and access. It depends on counting. The natural communication is lists and numbers. Behaviors are organizing, misplacing, losing, hoarding, etc.
    • Knowledge Deities
  • True to my word, I now have a WebExceptionHandler that launches stackoverflow.
  • Need to register for EMNLP 2016. Early registration ends October 1.
  • Reviewing Chapter2
  • Reviewing Chapter 3
  • Discussion with Aaron about building corpora
    • Build a LanguageModelNetworks browser using WebView. Backend connects to DB for clickstream logging, page storage, CSEs, etc.
      • Name/select the collection that’s being worked on
      • Enter the search term(s)
      • Results come back from the specified CSEs
      • When a page is found that looks good, add it to the collection
        • TF-IDF and centrality are calculated based on the updated corpus (tab for the current display that allows for manipulation). Top n words are made available for insertion into the search terms (a rough TF-IDF sketch follows this list)
        • Tag the page with some kind of smart, integrated tagger?
      • Rinse, lather, repeat.
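  • A rough sketch of the TF-IDF step in the browser idea above, assuming the collection is just an in-memory list of tokenized pages (no DB, no centrality, no stemming); the class and variable names are made up:
    import java.util.*;

    // Minimal TF-IDF over a small in-memory corpus; prints the top-n terms for one page.
    public class TfIdfSketch {
        // page is assumed to be one of the documents in corpus, so every term has a document frequency
        public static Map<String, Double> tfIdf(List<String> page, List<List<String>> corpus) {
            // term frequency within this page
            Map<String, Double> tf = new HashMap<>();
            for (String w : page) tf.merge(w, 1.0 / page.size(), Double::sum);

            // document frequency across the whole corpus
            Map<String, Integer> df = new HashMap<>();
            for (List<String> doc : corpus)
                for (String w : new HashSet<>(doc)) df.merge(w, 1, Integer::sum);

            Map<String, Double> scores = new HashMap<>();
            for (Map.Entry<String, Double> e : tf.entrySet()) {
                double idf = Math.log((double) corpus.size() / df.get(e.getKey()));
                scores.put(e.getKey(), e.getValue() * idf);
            }
            return scores;
        }

        public static void main(String[] args) {
            List<List<String>> corpus = Arrays.asList(
                    Arrays.asList("heart", "disease", "treatment", "options"),
                    Arrays.asList("heart", "surgery", "recovery", "time"),
                    Arrays.asList("diet", "exercise", "heart", "health"));
            tfIdf(corpus.get(0), corpus).entrySet().stream()
                    .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                    .limit(2) // top-n words that could be fed back into the search terms
                    .forEach(e -> System.out.printf("%s %.3f%n", e.getKey(), e.getValue()));
        }
    }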

Phil 9.7.16

7:00 – 4:30 ASRC

  • Sent a follow up note to Shimei. Regardless, I’ll send out the schedule options this evening. Thinking about the 27th and 28th as my preference. Structure so that it begins before rush hour and ends after for Thom?
  • Fixed the research through design section to focus on explicitly designing for behaviors.
  • Need to read up on what worked yesterday – NaiveBayes, SGD (stochastic gradient descent), SMO (sequential minimal optimization algorithm for training a support vector classifier)
    • NaiveBayes
    • SGD
    • SMO
  • Try implementing NB in straight Java? (a rough sketch follows this list)
  • Talked to Bob and was re-inspired to build a static method that fires off the browser to stackoverflow with the exception.
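  • A minimal sketch of what that static method could look like, assuming java.awt.Desktop is available on the platform; the class name matches the WebExceptionHandler mentioned above, but the method name and query format are just guesses:
    import java.awt.Desktop;
    import java.net.URI;
    import java.net.URLEncoder;

    // Fires off the default browser with a stackoverflow search built from an exception.
    public class WebExceptionHandler {
        public static void searchStackOverflow(Throwable t) {
            try {
                String msg = t.getMessage() == null ? "" : t.getMessage();
                String query = URLEncoder.encode(t.getClass().getSimpleName() + " " + msg, "UTF-8");
                if (Desktop.isDesktopSupported()) {
                    Desktop.getDesktop().browse(new URI("https://stackoverflow.com/search?q=" + query));
                }
            } catch (Exception e) {
                e.printStackTrace(); // fall back to the normal stack trace if the browser can't launch
            }
        }

        public static void main(String[] args) {
            try {
                throw new IllegalStateException("demo exception");
            } catch (IllegalStateException ise) {
                searchStackOverflow(ise);
            }
        }
    }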
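  • And a rough sketch of a multinomial Naive Bayes with Laplace smoothing, as a possible starting point for the "NB in straight Java" idea above; the class is self-contained and the training data is only a placeholder:
    import java.util.*;

    // Minimal multinomial Naive Bayes with Laplace smoothing for a junk/good text split.
    public class NaiveBayesSketch {
        private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
        private final Map<String, Integer> docCounts = new HashMap<>();
        private final Map<String, Integer> totalWords = new HashMap<>();
        private final Set<String> vocab = new HashSet<>();

        public void train(String label, List<String> words) {
            docCounts.merge(label, 1, Integer::sum);
            Map<String, Integer> counts = wordCounts.computeIfAbsent(label, k -> new HashMap<>());
            for (String w : words) {
                counts.merge(w, 1, Integer::sum);
                totalWords.merge(label, 1, Integer::sum);
                vocab.add(w);
            }
        }

        public String classify(List<String> words) {
            String best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            int totalDocs = docCounts.values().stream().mapToInt(Integer::intValue).sum();
            for (String label : docCounts.keySet()) {
                double score = Math.log((double) docCounts.get(label) / totalDocs); // class prior
                Map<String, Integer> counts = wordCounts.get(label);
                for (String w : words) {
                    int c = counts.getOrDefault(w, 0);
                    score += Math.log((c + 1.0) / (totalWords.get(label) + vocab.size())); // smoothed likelihood
                }
                if (score > bestScore) { bestScore = score; best = label; }
            }
            return best;
        }

        public static void main(String[] args) {
            NaiveBayesSketch nb = new NaiveBayesSketch();
            nb.train("junk", Arrays.asList("buy", "cheap", "pills", "now"));
            nb.train("good", Arrays.asList("cardiac", "surgery", "outcomes", "study"));
            System.out.println(nb.classify(Arrays.asList("cheap", "pills")));
        }
    }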

Phil 9.6.16

7:00 – 4:30 ASRC

  • Have everyone’s schedule for proposal but Shimei.
  • Saw an interesting article in Gamasutra on Behaviourism, In-game Economies and the Steam Community Market, which led me to get Hooked: How to Build Habit-Forming Products, which should be good for gamification of the UI
  • Working on section 1.6 – What the rest of the proposal looks like. Kinda done?
  • Just found a blog post that mentions this reviewer guideline for registered reports, which is kind of like a study proposal, where the research methods of a paper are submitted before the study is done. Interesting. Need to make sure that my proposal fits with this…
  • Back to WEKA and the analysis of the physician data.
    • Overall stats – 30 ‘good’, 12 junk, per these rules in RatingObj2 in the GoogleCSE2 project:
      // Collapse the detailed rating fields into a binary junk/good label
      public String junkOrGood(){
          boolean junk = true;
          if(personCharacterization.equals(INAPPROPRIATE)){
              return "junk";
          }
          if(sourceType.equals(MACHINE_GENERATED)){
              return "junk";
          }
          if(qualityCharacterization.equals(LOW) || qualityCharacterization.equals(MINIMAL))
          {
              return "junk";
          }
          if(trustworthiness.equals(NOT_CREDIBLE) || trustworthiness.equals(DISTRUSTWORTHY) || trustworthiness.equals(VERY_DISTRUSTWORTHY)){
              return "junk";
          }
          return "good";
      }
    • This shows the second pass using just the text. It turns out that the classifiers were targeting the meta information as the best predictor, and of course they were right. Pulled out the meta information and got the following. I do want to try some of the other meta information as well, like trustworthiness, and see if there’s anything that makes sense. Note that this corpus is just HTML pages that were successfully downloaded and scanned. No MS Word or PDF. (A minimal WEKA-API snippet for re-running this kind of cross-validation is at the end of this entry.)
    • NaiveBayes:
      Time taken to build model: 0.01 seconds
      
      === Stratified cross-validation ===
      === Summary ===
      
      Correctly Classified Instances 33 78.5714 %
      Incorrectly Classified Instances 9 21.4286 %
      Kappa statistic 0.5116
      Mean absolute error 0.2143
      Root mean squared error 0.4629
      Relative absolute error 51.8311 %
      Root relative squared error 102.1856 %
      Total Number of Instances 42 
      
      === Detailed Accuracy By Class ===
      
       TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
       0.750 0.200 0.600 0.750 0.667 0.519 0.747 0.509 junk
       0.800 0.250 0.889 0.800 0.842 0.519 0.810 0.876 good
      Weighted Avg. 0.786 0.236 0.806 0.786 0.792 0.519 0.792 0.771 
      
      === Confusion Matrix ===
      
       a b <-- classified as
       9 3 | a = junk
       6 24 | b = good
    • SGD (stochastic gradient descent):
      === Stratified cross-validation ===
      === Summary ===
      
      Correctly Classified Instances 35 83.3333 %
      Incorrectly Classified Instances 7 16.6667 %
      Kappa statistic 0.637 
      Mean absolute error 0.1667
      Root mean squared error 0.4082
      Relative absolute error 40.3131 %
      Root relative squared error 90.1193 %
      Total Number of Instances 42 
      
      === Detailed Accuracy By Class ===
      
       TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
       0.917 0.200 0.647 0.917 0.759 0.660 0.858 0.617 junk
       0.800 0.083 0.960 0.800 0.873 0.660 0.858 0.911 good
      Weighted Avg. 0.833 0.117 0.871 0.833 0.840 0.660 0.858 0.827 
      
      === Confusion Matrix ===
      
       a b <-- classified as
       11 1 | a = junk
       6 24 | b = good
    • SMO (sequential minimal optimization algorithm for training a support vector classifier.):
      Time taken to build model: 0.02 seconds
      
      === Stratified cross-validation ===
      === Summary ===
      
      Correctly Classified Instances 32 76.1905 %
      Incorrectly Classified Instances 10 23.8095 %
      Kappa statistic 0.5139
      Mean absolute error 0.2381
      Root mean squared error 0.488 
      Relative absolute error 57.5901 %
      Root relative squared error 107.7131 %
      Total Number of Instances 42 
      
      === Detailed Accuracy By Class ===
      
       TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
       0.917 0.300 0.550 0.917 0.687 0.558 0.808 0.528 junk
       0.700 0.083 0.955 0.700 0.808 0.558 0.808 0.882 good
      Weighted Avg. 0.762 0.145 0.839 0.762 0.773 0.558 0.808 0.781 
      
      === Confusion Matrix ===
      
       a b <-- classified as
       11 1 | a = junk
       9 21 | b = good
    • Multilayer Perceptron took a long time but didn’t produce any results?
    • Attribute Selected Classifier – J48 (Dimensionality of training and test data is reduced by attribute selection before being passed on to a classifier.)
      Time taken to build model: 1.41 seconds
      
      === Stratified cross-validation ===
      === Summary ===
      
      Correctly Classified Instances 34 80.9524 %
      Incorrectly Classified Instances 8 19.0476 %
      Kappa statistic 0.4815
      Mean absolute error 0.2238
      Root mean squared error 0.3805
      Relative absolute error 54.1364 %
      Root relative squared error 83.9928 %
      Total Number of Instances 42 
      
      === Detailed Accuracy By Class ===
      
       TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
       0.500 0.067 0.750 0.500 0.600 0.499 0.729 0.682 junk
       0.933 0.500 0.824 0.933 0.875 0.499 0.729 0.823 good
      Weighted Avg. 0.810 0.376 0.803 0.810 0.796 0.499 0.729 0.783 
      
      === Confusion Matrix ===
      
       a b <-- classified as
       6 6 | a = junk
       2 28 | b = good
    • Discussion with Aaron about the upcoming epics for machine learning. I think a lot of this is going to be about classifying data well for subsequent learning.
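    • For reference, a minimal snippet for re-running this kind of 10-fold stratified cross-validation through the WEKA Java API instead of the Explorer; the ARFF filename is a placeholder, and a text-only run would still need a StringToWordVector pass first:
      import java.util.Random;
      import weka.classifiers.Evaluation;
      import weka.classifiers.bayes.NaiveBayes;
      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;

      // 10-fold stratified cross-validation of NaiveBayes on an ARFF export of the corpus.
      public class WekaCrossValidation {
          public static void main(String[] args) throws Exception {
              Instances data = DataSource.read("physician_pages.arff"); // placeholder filename
              data.setClassIndex(data.numAttributes() - 1);             // junk/good is the last attribute

              NaiveBayes nb = new NaiveBayes();
              Evaluation eval = new Evaluation(data);
              eval.crossValidateModel(nb, data, 10, new Random(1));

              System.out.println(eval.toSummaryString());
              System.out.println(eval.toClassDetailsString());
              System.out.println(eval.toMatrixString());
          }
      }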

Phil 9.1.16

7:00 – 4:30 ASRC

Time taken to build model: 0.07 seconds

=== Stratified cross-validation ===
=== Summary ===

Correctly Classified Instances 33 78.5714 %
Incorrectly Classified Instances 9 21.4286 %
Kappa statistic 0.5116
Mean absolute error 0.2143
Root mean squared error 0.4629
Relative absolute error 51.8311 %
Root relative squared error 102.1856 %
Total Number of Instances 42 

=== Detailed Accuracy By Class ===

 TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
 0.750 0.200 0.600 0.750 0.667 0.519 0.757 0.540 junk
 0.800 0.250 0.889 0.800 0.842 0.519 0.810 0.876 good
Weighted Avg. 0.786 0.236 0.806 0.786 0.792 0.519 0.795 0.780 

=== Confusion Matrix ===

 a b <-- classified as
 9 3 | a = junk
 6 24 | b = good