Category Archives: research

Phil 3.10.19

Learning to Speak and Act in a Fantasy Text Adventure Game

  • We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.

New run in the dungeon. Exciting!

Finished my pass through Antonio’s paper

Zoe Keating (May 1) or Imogen Heap (May 3)?

Phil 3.4.19

7:00 – 5:00 ASRC

  • Build an interactive SequenceAnalyzer. The adjustments are
    • Number of buckets
    • Percentages for each analytic (percentages to keep/discard)
    • Selectable skip words that can be added to a list (in the db?)
  • Algorithm
    1. Find the most common words across all groups, these are skip_words
    2. Find the most common words along the entire series of posts per player and eliminate them
    3. Find the most common/central words across all sequences and keep those as belief places
    4. For each sequence by group, find the most common/central words after the belief places. These are the belief spaces.
    5. Build an adjacency matrix of players, groups, places and spaces
    6. Build submatrices for centrality calculations? This could be done instead of finding the most common words
    7. Possible word2vec variations?
      1. It seems to me that I might be able to use direction cosines and dynamic time warping to calculate the similarity of posts and align them better than the overall scaling that I’m doing now. DM posts introducing a room should align perfectly, and then other scaling could happen between those areas of greatest alignment (see the sketch at the end of this entry)
  • Display
    • Menu:
      • Save spreadsheet (includes config, included words, posts(?), trajectories)
      • load data
      • select database
      • select group within db
      • load/save config file
      • clear all
    • Fields
      • percent for A1, A2, A3, A4
      • Centrality/Sum switch
      • BOW/TF-IDF switch
      • Word2vec switch?
    • Textarea (areas? tabbed?)
      • Table with rows as sequence step. Columns are grouped by places, spaces, groups, and players
  • Worked on Antonio’s paper – got a first draft of the introduction and motivation
  • BAA
    • Upload latex and references to laptop
  • Haircut! Pack!
  • Model-Based Reinforcement Learning for Atari
      • Model-free reinforcement learning (RL) can be used to learn effective policies for complex tasks, such as Atari games, even from image observations. However, this typically requires very large amounts of interaction — substantially more, in fact, than a human would need to learn the same games. How can people learn so quickly? Part of the answer may be that people can learn how the game works and predict which actions will lead to desirable outcomes. In this paper, we explore how video prediction models can similarly enable agents to solve Atari games with orders of magnitude fewer interactions than model-free methods. We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting. Our experiments evaluate SimPLe on a range of Atari games and achieve competitive results with only 100K interactions between the agent and the environment (400K frames), which corresponds to about two hours of real-time play.
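
A minimal sketch of the direction-cosine / dynamic-time-warping idea in item 7 above, assuming each post is reduced to a TF-IDF vector. The function and variable names are hypothetical, not part of the actual SequenceAnalyzer:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def dtw_align(cost):
    """Classic O(n*m) DTW over a precomputed cost matrix; returns the warp path."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:  # walk back from the corner to recover the alignment
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return list(reversed(path))

def align_post_sequences(posts_a, posts_b):
    """Vectorize both sequences together, then DTW on (1 - cosine similarity)."""
    vec = TfidfVectorizer(stop_words='english')
    X = vec.fit_transform(posts_a + posts_b).toarray()
    A, B = X[:len(posts_a)], X[len(posts_a):]
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-9)  # direction cosines
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-9)
    cost = 1.0 - A @ B.T
    return dtw_align(cost)

if __name__ == "__main__":
    dm = ["You enter a dark room lit by a single torch", "A troll blocks the bridge"]
    player = ["I look around the dark room", "I ready my sword", "I attack the troll on the bridge"]
    print(align_post_sequences(dm, player))  # prints an alignment path like [(0, 0), (0, 1), (1, 2)]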

 

Phil 3.3.19

Once more, icky weather makes me productive

  • Ingested all the runs into the db. We are at 7,246 posts
  • Reworking the 5 bucket analysis
  • Building better ignore files and rebuilding bucket spreadsheets. It turns out that for tymora1, names took up 25% of the BOW, so I increased the fraction saved to the trimmed spreadsheets to 50% (see the sketch at the end of this entry)
  • Building bucket spreadsheets and saving the centrality vector
  • Here’s what I’ve got so far: ThreeRuns
  • Trajectories: Trajectories
  • First map: firstMap
  • Here it is annotated: firstMapAnnotated
  • Some thoughts. I think this is still “zoomed out” too far. Changing the granularity should help some. I need to automate some of my tools though. The other issue is how I’m assembling my sequences.
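
A rough sketch of the ignore-file plus trimmed-BOW step mentioned above. The flat <ignore><word>…</word></ignore> layout for ignore.xml and the tymora1_posts name in the usage comment are assumptions, not the actual formats:

import re
import xml.etree.ElementTree as ET
from collections import Counter

def load_ignore_words(path="ignore.xml"):
    """Assumes a flat file like <ignore><word>the</word><word>poe</word>...</ignore>."""
    root = ET.parse(path).getroot()
    return {w.text.strip().lower() for w in root.iter("word") if w.text}

def trimmed_bow(posts, ignore_words, fraction=0.5):
    """Count words, drop names/skip words, keep the top `fraction` of the BOW."""
    counts = Counter()
    for post in posts:
        for tok in re.findall(r"[a-z']+", post.lower()):
            if tok not in ignore_words:
                counts[tok] += 1
    keep = max(1, int(len(counts) * fraction))
    return counts.most_common(keep)

# usage sketch
# ignore = load_ignore_words("ignore.xml")
# print(trimmed_bow(tymora1_posts, ignore, fraction=0.5))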

Phil 3.2.19

Updating SheetToMap to take comma separated cell names. Lines 180 – 193. I think I’ll need an iterating compare function. Nope, wound up doing something simpler

for (String colName : colNames) {
    String curCells = tm.get(colName);
    String[] cellArray = curCells.split("\\|\\|"); // <--- new! a cell entry can now hold several "||"-separated names
    for (String curCell : cellArray) {
        addNode(curCell, rowName);
        if (prevCell != null && !curCell.equals(prevCell)) {
            String edgeName = curCell + "+" + prevCell;
            if (graph.getEdge(edgeName) == null) {
                try {
                    graph.addEdge(edgeName, curCell, prevCell);
                    System.out.println("adding edge [" + edgeName + "]");
                } catch (EdgeRejectedException e) {
                    System.out.println("didn't add edge [" + edgeName + "]");
                }
            }
        }
        prevCell = curCell;
    }

    //System.out.print(curCell + ", ");
    prevCell = cellArray[0]; // link the next column back to this cell's root (the first name)
    col++;
}

Updating GPM to generate comma separated cell names in trajectories

  • Need to get the previous n cell names
  • Need to change the cellName val in FlockingBeliefCA to be a stack of tail length. Done.
  • Parsed the strings in SheetToMap. Each cell has a root name (the first), which connects to the root of the previous cell. The root then links to the subsequent names in its chain, which are separated by “||” (see the sketch at the end of this entry):
    "cell_[4, 5]||cell_[4, 4]||cell_[4, 3]||cell_[4, 2]||cell_[4, 1]"
  • Seems to be working: tailtest
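
The working version of this lives in the Java SheetToMap code above; here is just a small Python sketch of the chain structure as I read it, with edges stored as name pairs rather than GraphStream nodes:

def add_chain_edges(cell_strings):
    """Each entry looks like 'cell_[4, 5]||cell_[4, 4]||...'; the first name is the root."""
    edges = set()
    prev_root = None
    for cell in cell_strings:
        chain = cell.split("||")
        root = chain[0]
        if prev_root is not None and root != prev_root:
            edges.add((root, prev_root))  # root connects to the previous cell's root
        for a, b in zip(chain, chain[1:]):
            edges.add((a, b))  # then link the names down the tail, one after another
        prev_root = root
    return edges

trajectory = ["cell_[4, 5]||cell_[4, 4]||cell_[4, 3]",
              "cell_[5, 5]||cell_[4, 5]||cell_[4, 4]"]
for a, b in sorted(add_chain_edges(trajectory)):
    print(a, "->", b)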

Phil 3.1.19

7:00 – ASRC

  • Got accepted to the TF dev conference. The flight out is expensive… Sent Eric V. a note asking for permission to go, but bought tix anyway given the short fuse
  • Downloaded the full slack data
  • Working on white paper. The single file was getting unwieldy, so I broke it up
  • Found Speeding up Parliamentary Decision Making for Cyber Counter-Attack, which argues for the possibility of pre-authorizing automated response
  • Up to six pages. In the middle of the cyberdefense section

Phil 2.28.19

7:00 – very, very, late ASRC

  • Tomorrow is March! I need to write a few paragraphs for Antonio this weekend
  • YouTube stops recommending alt-right channels
    • For the first two weeks of February, YouTube was recommending videos from at least one of these major alt-right channels on more than one in every thirteen randomly selected videos (7.8%). From February 15th, this number has dropped to less than one in two hundred and fifty (0.4%).
  • Working on text splitting Group1 in the PHPBB database
    • Updated the view so the same queries work
    • Discovered that you can do this: …, “message” as type, …. That gives you a column named type filled with the literal “message”. Via Stack Overflow
    • Mostly working, though I’m missing the last bucket for some reason. But there’s good overlap with the Slack data.
    • Was debugging on my office box and was wondering where all the data after the troll was! Oops, not loaded
    • Changed the time tests to be > ts1 and <= ts2 (see the sketch at the end of this entry)
  • Working on the white paper. Deep into strategy, Cyberdefense, and the evolution towards automatic active response in cyber.
  • Looooooooooooooooooooooooooong meeting of Shimei’s group. Interesting but difficult paper: Learning Dynamic Embeddings from Temporal Interaction Networks
  • Emily’s run in the dungeon finishes tonight!
  • Looks like I’m going to the TF Dev conference after all….
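
A sketch of the two query tricks noted above: the literal type column and the half-open (> ts1, <= ts2) window so adjacent buckets don’t double-count rows. The view and column names are borrowed from the Slack post_view, assuming the reworked phpbb view exposes the same shape and that conn is an open pymysql-style connection:

BUCKET_QUERY = """
    SELECT p.post_time, p.post_text, 'message' AS type  -- literal column, as noted above
    FROM post_view p
    WHERE p.username = %s
      AND p.post_time >  %s   -- half-open interval: > ts1 ...
      AND p.post_time <= %s   -- ... and <= ts2
    ORDER BY p.post_time
"""

def posts_in_bucket(conn, username, ts1, ts2):
    with conn.cursor() as cur:
        cur.execute(BUCKET_QUERY, (username, ts1, ts2))
        return cur.fetchall()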

Phil 2.27.19

7:00 – 5:30 ASRC

  • Getting closer to the goal by being less capable
    • Understanding how systems with many semi-autonomous parts reach a desired target is a key question in biology (e.g., Drosophila larvae seeking food), engineering (e.g., driverless navigation), medicine (e.g., reliable movement for brain-damaged individuals), and socioeconomics (e.g., bottom-up goal-driven human organizations). Centralized systems perform better with better components. Here, we show, by contrast, that a decentralized entity is more efficient at reaching a target when its components are less capable. Our findings reproduce experimental results for a living organism, predict that autonomous vehicles may perform better with simpler components, offer a fresh explanation for why biological evolution jumped from decentralized to centralized design, suggest how efficient movement might be achieved despite damaged centralized function, and provide a formula predicting the optimum capability of a system’s components so that it comes as close as possible to its target or goal.
  • Nice chat with Greg last night. He likes the “Bones in a Hut” and “Stampede Theory” phrases. It turns out the domains are available…
    • Thinking that the title of the book could be “Stampede Theory: Why Group Think Happens, and why Diversity is the First, Best Answer”. Maybe structure the iConference talk around that as well.
  • Guidance from Antonio: In the meantime, if you have an idea on how to structure the Introduction, please go on considering that we want to put the decision logic inside each Autonomous Car that will be able to select passengers and help them in a self-organized manner.
  • Try out the splitter on the Tymora1 text.
    • Incorporate the ignore.xml when reading the text
    • If things look promising, then add changes to the phpbb code and try on that text as well.
    • At this point I’m just looking at overlapping lists of words that become something like a sand chart. I wonder if I can use the eigenvector centrality values as a percentage connectivity/weight (see the sketch at the end of this entry)? Weights
    • Ok – I have to say that I’m pretty happy with this. These are centralities using the top 25% BOW from the Slack text of Tymora1. I think that the way to use this is to have each group be an “agent” that has a cluster of words for each step: Top 10
    • Based on this, I’d say add an “Evolving Networks of Words” section to the dissertation. Have to find that WordRank paper
  • Working on white paper. Lit review today, plus fix anything that I might have broken…
    • Added section on cybersecurity that got lost in the update fiasco
    • Aaron found a good paper on the lack of advantage that the US has in AI, particularly wrt China
  • Avoiding working on white paper by writing a generator for Aaron. Done!
  • Cortex is an open-source platform for building, deploying, and managing machine learning applications in production. It is designed for any developer who wants to build machine learning powered services without having to worry about infrastructure challenges like configuring data pipelines, continuous deployment, and dependency management. Cortex is actively maintained by Cortex Labs. We’re a venture-backed team of infrastructure engineers and we’re hiring.
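
A sketch of the eigenvector-weights idea above, assuming the word network is built from same-post co-occurrence (which may not be how the bucket spreadsheets assemble it). networkx does the centrality, and the scores are normalized so they read as percentages:

import itertools
import networkx as nx

def word_weights(posts, keep_words):
    """Co-occurrence graph over the kept words; eigenvector centrality as percentages."""
    G = nx.Graph()
    for post in posts:
        toks = [t for t in post.lower().split() if t in keep_words]
        for a, b in itertools.combinations(set(toks), 2):
            w = G.get_edge_data(a, b, {}).get("weight", 0) + 1
            G.add_edge(a, b, weight=w)
    cent = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
    total = sum(cent.values())
    return {word: 100.0 * c / total for word, c in cent.items()}

# usage sketch: keep_words would be the top 25% BOW for the group
# for word, pct in sorted(word_weights(slack_posts, keep_words).items(), key=lambda kv: -kv[1])[:10]:
#     print(f"{word}: {pct:.1f}%")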

Phil 2.26.19

7:00 – 3:00 ASRC

    • Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel. It’s free and open source.
    • More white paper. Add Flynn’s thoughts about cyber security – see notes from yesterday
    • Reconnected with Antonio. He’d like me to write the introduction and motivation for his SASO paper
    • Add time bucketing to postanalyzer. I’m really starting to want to add a UI
      • Looks done. Try it out next time
        Running query for Poe in subject peanutgallery between 23:56 and 00:45
        Running query for Dungeon Master in subject peanutgallery between 23:56 and 00:45
        Running query for Lord Javelin in subject peanutgallery between 23:56 and 00:45
        Running query for memoriesmaze in subject peanutgallery between 23:56 and 00:45
        Running query for Linda in subject peanutgallery between 23:56 and 00:45
        Running query for phil in subject peanutgallery between 23:56 and 00:45
        Running query for Lorelai in subject peanutgallery between 23:56 and 00:45
        Running query for Bren'Dralagon in subject peanutgallery between 23:56 and 00:45
        Running query for Shelton Herrington in subject peanutgallery between 23:56 and 00:45
        Running query for Keiri'to in subject peanutgallery between 23:56 and 00:45
    • More white paper. Got through the introduction and background. Hopefully I didn’t lose anything when I had to resynchronize with the repository that I hadn’t updated from

 

Phil 2.25.19

7:00 – 2:30 ASRC TL

2:30 – 4:30 PhD

  • Fix directory code of LMN so that it remembers the input and output directories – done
  • Add time bucketing capabilities. Do this by taking the complete conversation and splitting the results into N sublists. Take the beginning and ending times from each list and use those to set the timestamp start and stop for each player’s posts (see the sketch at the end of this entry).
  • Thinking about a time-series LMN tool that can chart the relative occurrence of the sorted terms over time. I think this could be done with tkinter. I would need to create an executable as described here, though the easiest answer seems to be pyinstaller.
  • Here are two papers that show the advantages of herding over nomadic behavior:
    • Phagotrophy by a flagellate selects for colonial prey: A possible origin of multicellularity
      • Predation was a powerful selective force promoting increased morphological complexity in a unicellular prey held in constant environmental conditions. The green alga, Chlorella vulgaris, is a well-studied eukaryote, which has retained its normal unicellular form in cultures in our laboratories for thousands of generations. For the experiments reported here, steady-state unicellular C. vulgaris continuous cultures were inoculated with the predator Ochromonas vallescia, a phagotrophic flagellated protist (‘flagellate’). Within less than 100 generations of the prey, a multicellular Chlorella growth form became dominant in the culture (subsequently repeated in other cultures). The prey Chlorella first formed globose clusters of tens to hundreds of cells. After about 10–20 generations in the presence of the phagotroph, eight-celled colonies predominated. These colonies retained the eight-celled form indefinitely in continuous culture and when plated onto agar. These self-replicating, stable colonies were virtually immune to predation by the flagellate, but small enough that each Chlorella cell was exposed directly to the nutrient medium.
    • De novo origins of multicellularity in response to predation
      • The transition from unicellular to multicellular life was one of a few major events in the history of life that created new opportunities for more complex biological systems to evolve. Predation is hypothesized as one selective pressure that may have driven the evolution of multicellularity. Here we show that de novo origins of simple multicellularity can evolve in response to predation. We subjected outcrossed populations of the unicellular green alga Chlamydomonas reinhardtii to selection by the filter-feeding predator Paramecium tetraurelia. Two of five experimental populations evolved multicellular structures not observed in unselected control populations within ~750 asexual generations. Considerable variation exists in the evolved multicellular life cycles, with both cell number and propagule size varying among isolates. Survival assays show that evolved multicellular traits provide effective protection against predation. These results support the hypothesis that selection imposed by predators may have played a role in some origins of multicellularity.
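
A minimal sketch of the bucketing scheme described in the time-bucketing item above, assuming the conversation arrives as post objects with .ts and .author attributes (the actual data structures will differ):

def time_buckets(posts, n_buckets):
    """Split the whole conversation into N sublists; return each sublist's (start, stop) timestamps."""
    posts = sorted(posts, key=lambda p: p.ts)
    windows = []
    for i in range(n_buckets):
        chunk = posts[i * len(posts) // n_buckets:(i + 1) * len(posts) // n_buckets]
        if chunk:
            windows.append((chunk[0].ts, chunk[-1].ts))
    return windows

def player_posts_by_bucket(posts, player, windows):
    """Use each window as the timestamp start/stop for one player's posts."""
    return [[p for p in posts if p.author == player and start <= p.ts <= stop]
            for (start, stop) in windows]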

Phil 2.13.19

7:00 – 7:00 ASRC IRAD TL

  • The Digital Clockwork Muse: A Computational Model of Aesthetic Evolution
    • This paper presents a computational model of creativity that attempts to capture within a social context an important aspect of the art and design process: the search for novelty. The computational model consists of multiple novelty-seeking agents that can assess the interestingness of artworks. The agents can communicate particularly interesting artworks to others. Agents can also communicate to reward other agents for finding interesting artworks. We present the results from running experiments to investigate the effects of searching for different degrees of novelty on the artworks produced and the social organisation of the agents.
  • Upload the rest of Slack Tymora.
  • Create some txt files and feed into LMN. I’m thinking of by player and then by slice. Do this for both PHPBB and Slack data. Other ideas
    • Look into coherence measures (see the sketch at the end of this entry)
    • Are there economic models of attention? (arXiv)
    • TAACO is an easy to use tool that calculates 150 indices of both local and global cohesion, including a number of type-token ratio indices (including specific parts of speech, lemmas, bigrams, trigrams and more), adjacent overlap indices (at both the sentence and paragraph level), and connectives indices.
    • CRAT is an easy to use tool that includes over 700 indices related to lexical sophistication, cohesion and source text/summary text overlap. CRAT is particularly well suited for the exploration of writing quality as it relates to summary writing.
    •  TAALED is an analysis tool designed to calculate a wide variety of lexical diversity indices. Homographs are disambiguated using part of speech tags, and indices are calculated using lemma forms. Indices can also be calculated using all lemmas, content lemmas, or function lemmas. Also available is diagnostic output which allows the user to see how TAALED processed each word.
    • TAALES is a tool that measures over 400 classic and new indices of lexical sophistication, and includes indices related to a wide range of sub-constructs.  TAALES indices have been used to inform models of second language (L2) speaking proficiency, first language (L1) and L2 writing proficiency, spoken and written lexical proficiency, genre differences, and satirical language.
    • SEANCE is an easy to use tool that includes 254 core indices and 20 component indices based on recent advances in sentiment analysis. In addition to the core indices, SEANCE allows for a number of customized indices including filtering for particular parts of speech and controlling for instances of negation.
    • TAASSC is an advanced syntactic analysis tool. It measures a number of indices related to syntactic development. Included are classic indices of syntactic complexity (e.g., mean length of T-unit) and fine-grained indices of phrasal (e.g., number of adjectives per noun phrase) and clausal (e.g., number of adverbials per clause) complexity. Also included are indices that are grounded in usage-based perspectives to language acquisition that rely on frequency profiles of verb argument constructions.
    • GAMET is an easy to use tool that provides incidence counts for structural and mechanics errors in texts including grammar, spelling, punctuation, white space, and repetition errors. The tool also provides line output for the errors flagged in the text.
    • Comparison of Top 6 Python NLP Libraries
      • NLTK (Natural Language Toolkit) is used for such tasks as tokenization, lemmatization, stemming, parsing, POS tagging, etc. This library has tools for almost all NLP tasks.
      • Spacy is the main competitor of the NLTK. These two libraries can be used for the same tasks.
      • Scikit-learn provides a large library for machine learning. The tools for text preprocessing are also presented here.
      • Gensim is the package for topic and vector space modeling, document similarity.
      • The general mission of the Pattern library is to serve as a web mining module, so it supports NLP only as a side task.
      • Polyglot is yet another Python package for NLP. It is not very popular but can be used for a wide range of NLP tasks.
  • Continuing writing Clockwork Muse review
  • Reading Attachment 1 to BAA FA8750-18-S-7014. “While white papers will be considered if received prior to 6:00 PM Eastern Standard Time (EST) on 30 Sep 2022, the following submission dates are suggested to best align with projected funding:” 
    • FY20 – 15 April 2019
  • AIMS/ML Meeting. Not sure what the outcome was, other than folks are covered for this quarter?
  • Long, wide ranging meeting with Wayne at Frisco’s. Gave him an account on Antibubbles.com. And it seems like we won first place for Blue Sky papers?
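
For the coherence-measures item above, a crude baseline to compare the listed tools against: the average cosine similarity between adjacent posts. Nothing here comes from TAACO or the other packages; it’s just a quick sanity check:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def local_coherence(posts):
    """Mean cosine similarity of each post with the one before it."""
    if len(posts) < 2:
        return 0.0
    X = TfidfVectorizer(stop_words='english').fit_transform(posts)
    sims = [cosine_similarity(X[i], X[i + 1])[0, 0] for i in range(X.shape[0] - 1)]
    return sum(sims) / len(sims)

# usage sketch, per player or per slice
# print(local_coherence(dm_posts), local_coherence(player_posts))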

Phil 2.12.19

7:00 – 4:30 ASRC IRAD

  • Talked with Eric yesterday. Going to write up a white paper about teachable AI. Two-to-three-week effort
  • Speaking of which, The Evolved Transformer
    • Recent works have highlighted the strengths of the Transformer architecture for dealing with sequence tasks. At the same time, neural architecture search has advanced to the point where it can outperform human-designed models. The goal of this work is to use architecture search to find a better Transformer architecture. We first construct a large search space inspired by the recent advances in feed-forward sequential models and then run evolutionary architecture search, seeding our initial population with the Transformer. To effectively run this search on the computationally expensive WMT 2014 English-German translation task, we develop the progressive dynamic hurdles method, which allows us to dynamically allocate more resources to more promising candidate models. The architecture found in our experiments – the Evolved Transformer – demonstrates consistent improvement over the Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French, WMT 2014 English-Czech and LM1B. At big model size, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss in quality. At a much smaller – mobile-friendly – model size of ~7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU on WMT’14 English-German.
  • Finished running Tymora1 on Slack. Downloaded, though the download didn’t include research_notes. Hmmm. Looks like I can’t make it public, either.
  • Thinking about writing a tagging app, possibly with a centrality capability.
  • Started on the Teachable AI paper. The rough outline is there, and I have a good set of references.

Phil 2.11.19

7:00 – 5:00 ASRC IRAD (TL)

  • Gen Studio is a way to navigate between designs in latent space. It is a prototype concept which was created over a two-day hackathon with collaborators across The Met, Microsoft, and MIT.
  • Write up Clockwork Muse
  • Continue with parsing, storing and report generation of Slack data. Aaron had the idea that multiple statements by one person should be combined into a single post. Need to think about how that works in the report generation. Since the retrieved list is ordered by timestamp, the naive implementation is to accumulate text into a single post as long as the same person is “talking” (see the sketch at the end of this entry)
  • Pinged back to Panos about JuryRoom. The original email evaporated, so I tried again…
  • Setting up a meeting with Wayne for Wednesday?
  • Fika – nope, meeting with Eric instead. The goal is to write up a whitepaper for human in the loop AI
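
A sketch of the combine-consecutive-posts idea above, assuming the report generator sees (timestamp, username, text) tuples already ordered by time; the real data structures are certainly different:

def merge_consecutive(posts):
    """Accumulate text into one post for as long as the same person is talking."""
    merged = []
    for ts, user, text in posts:
        if merged and merged[-1][1] == user:
            first_ts, _, prev_text = merged[-1]
            merged[-1] = (first_ts, user, prev_text + " " + text)  # keep the first timestamp
        else:
            merged.append((ts, user, text))
    return merged

# usage sketch
# rows = [(1, "Poe", "I open the door"), (2, "Poe", "slowly"), (3, "Dungeon Master", "It creaks")]
# merge_consecutive(rows) -> [(1, 'Poe', 'I open the door slowly'), (3, 'Dungeon Master', 'It creaks')]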

Phil 2.8.19

7:00 – 6:00 ASRC IRAD TL

  • Need to ping Eric about tasking. Suggest time series prediction. Speaking of which, Transformers (post 1 and post 2) may be much better than LSTMs for series prediction.
    • The Transformer model in Attention is all you need: a Keras implementation.
      • A Keras+TensorFlow Implementation of the Transformer: “Attention is All You Need” (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, arxiv, 2017)
    • keras-transformer 0.17.0
      • Implementation of transformer for translation-like tasks.
    • The other option is “teachable” ML systems using evolution. There is a lot of interesting older work in this area (see the sketch at the end of this entry):
      • Particle swarms for feedforward neural network training
      • Evolving artificial neural networks
      • Training Feedforward Neural Networks Using Genetic Algorithms.
        • Multilayered feedforward neural networks possess a number of properties which make them particularly suited to complex pattern classification problems. However, their application to some real world problems has been hampered by the lack of a training algorithm which reliably finds a nearly globally optimal set of weights in a relatively short time. Genetic algorithms are a class of optimization procedures which are good at exploring a large and complex space in an intelligent way to find values close to the global optimum. Hence, they are well suited to the problem of training feedforward networks. In this paper, we describe a set of experiments performed on data from a sonar image classification problem. These experiments both 1) illustrate the improvements gained by using a genetic algorithm rather than backpropagation and 2) chronicle the evolution of the performance of the genetic algorithm as we added more and more domain-specific knowledge into it.
  • Add writing to the db from within the program, download the latest slack bundle, and try storing it!
  • Read in test-dungeon-1 and realized that there is no explicit link between the channel and the message in the data, so I added fields for the current directory and the current file
  • Ok, everything seems to be working. I had a few trips around the block getting a unique id for messages, but that seems ok now.
  • Created view(s), where I learned how to use conditionals and was happy:
    SELECT * FROM t_message;
    SELECT * FROM t_message_files;

    -- username prefers the profile display name, falling back to the account name
    CREATE or REPLACE VIEW user_view AS
    SELECT u.id, p.email, p.real_name,
           (CASE WHEN p.display_name > '' THEN p.display_name ELSE u.name END) as username
    FROM t_user u
           INNER JOIN t_user_profile p ON u.id = p.parent_id;

    select * from user_view;

    -- one row per post: readable timestamp, channel directory as topic, and subtype when present
    CREATE or REPLACE VIEW post_view AS
    SELECT FROM_UNIXTIME(p.ts) as post_time, p.dirname as post_topic, p.text as post_text, u.username,
           (CASE WHEN p.subtype > '' THEN p.subtype ELSE p.type END) as type
    FROM t_message p
           INNER JOIN user_view u ON p.user = u.id;

    select * from post_view order by post_time limit 1000;

     

  • Need to put together a strawman invitation that also has checkboxes for BB-based and/or Slack-based preferences and why a user might choose one over the other. Nope, not yet
  • Got the Slack academic discount!
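
A toy sketch in the spirit of the Montana & Davis paper above: evolve the flat weight vector of a tiny feedforward net with a genetic algorithm instead of backpropagation. The XOR task and every hyperparameter here are arbitrary choices for illustration, not anything from the white paper:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HID = 4
N_W = 2 * N_HID + N_HID + N_HID + 1  # input->hidden weights, hidden biases, hidden->output weights, output bias

def forward(w, x):
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    b1 = w[2 * N_HID:3 * N_HID]
    W2 = w[3 * N_HID:4 * N_HID]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # higher is better

pop = rng.normal(0.0, 1.0, size=(50, N_W))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]  # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(N_W) < 0.5  # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0.0, 0.1, N_W))  # gaussian mutation
    pop = np.array(children)

best = max(pop, key=fitness)
print(np.round(forward(best, X), 2))  # usually drifts toward [0, 1, 1, 0]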

Phil 2.7.19

7:00 – 7:00 ASRC IRAD TL

  • Continuing with Slack to DB process. I should be done with channels, and now I need to get conversations done.
    • The secondary tables that point to the primary user and conversation tables, and the tertiary tables that point at them, need to be looked at based on what happens when we go past the 10k limit (assuming that I can’t get the discount on the Standard Plan). REPLACE INTO won’t work with an auto-incrementing primary key (see the sketch at the end of this entry)
    • Got all the parts working, now I need to automate and try out on Tymora1
    • Need to write up a letter for Don to sign – done
    • I think Emily is having a run tonight? Nope
    • Added a research_notes section to Slack for Aaron and me right now. I think I’ll invite Wayne as well – done! Need to know
  • Submitted expenses for TL trip
  • Was officially not invited to the TF dev conf
  • Shimei’s group meeting 4:30 – 7:00
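
A sketch of the usual MySQL workaround for the REPLACE INTO / auto-increment issue noted above: put a UNIQUE key on the natural Slack identifiers and use INSERT … ON DUPLICATE KEY UPDATE, so re-ingesting a bundle updates rows instead of burning new ids. The UNIQUE (ts, user) index is an assumption about the schema, and conn is assumed to be a pymysql-style connection:

UPSERT_MESSAGE = """
    INSERT INTO t_message (ts, user, text)
    VALUES (%s, %s, %s)
    ON DUPLICATE KEY UPDATE text = VALUES(text)  -- assumes a UNIQUE index on (ts, user)
"""

def store_message(conn, ts, user, text):
    with conn.cursor() as cur:
        cur.execute(UPSERT_MESSAGE, (ts, user, text))
    conn.commit()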

Phil 2.6.19

7:00 – 5:00 ASRC IRAD (TL)

  • The role of maps in spatial knowledge acquisition
    • The Cartographic Journal
    • One goal of cartographic research is to improve the usefulness of maps. To do so, we must consider the process of spatial knowledge acquisition, the role of maps in that process, and the content of cognitive representations derived. Research from psychology, geography, and other disciplines related to these issues is reviewed. This review is used to suggest potential new directions for research with particular attention to spatial problem solving and geographic instruction. A classroom experiment related to these issues is then described. The experiment highlights some of the implications that a concern for the process of spatial knowledge acquisition will have on questions and methods of cartographic research as well as on the use of maps in geographic instruction. It also provides evidence of independent but interrelated verbal and spatial components of regional images that can be altered by directed map work.
  • It’s Not A Lie If You Believe It: Lying and Belief Distortion Under Norm-Uncertainty
    • This paper focuses on norm-following considerations as motivating behavior when lying opportunities are present. To obtain evidence on what makes it harder/easier to lie, we hypothesize that subjects might use belief-manipulation in order to justify their lying. We employ a two-stage variant of a cheating paradigm, in which subjects’ beliefs are elicited in stage 1 before performing the die task in stage 2. In stage 1: a) we elicit the subjects’ beliefs about majoritarian (i) behavior or (ii) normative beliefs in a previous session, and b) we vary whether participants are (i) aware or (ii) unaware of the upcoming opportunity to lie. We show that belief manipulation happens, and takes the form of people convincing themselves that lying behavior is widespread. In contrast with beliefs about the behavior of others, we find that beliefs about their normative convictions are not distorted, since believing that the majority disapproves of lying does not inhibit own lying. These findings are consistent with a model where agents are motivated by norm-following concerns, and honest behavior is a strong indicator of disapproval of lying but disapproval of lying is not a strong indicator of honest behavior. We provide evidence that supports this hypothesis.
  • Sent a note to Slack asking for an academic plan. They have one, and there are forms to fill out. I need to send Don some text that he can send back to me on letterhead.
  • Looks like I’m not going to the TF Dev Conf this year…
  • Continuing with the INSERT code
  • Meeting in Greenbelt to discuss… what, exactly?
  • Got a cool book: A Programmer’s Introduction to Mathematics
  • Got my converter creating error-free SQL (see the sketch at the end of this entry)! t_user
  • Working on reading channel data into the db. Possibly done, but I’m afraid to run it so late in the day. I have chores!
  • Reviewing proposal for missing citations – done
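
A sketch of the users.json half of the converter mentioned above. The t_user / t_user_profile columns are taken from the views in the 2.8 entry; the quoting helper, the REPLACE INTO choice (fine here, since the Slack string id is the primary key rather than an auto-increment), and the paths in the usage comment are assumptions:

import json

def q(s):
    """Minimal string quoting for generated SQL (doubles any embedded quotes)."""
    return "'" + str(s).replace("'", "''") + "'"

def users_to_sql(path="users.json"):
    """Turn the Slack export's users.json into t_user / t_user_profile statements."""
    with open(path, encoding="utf-8") as f:
        users = json.load(f)
    stmts = []
    for u in users:
        p = u.get("profile", {})
        stmts.append("REPLACE INTO t_user (id, name) VALUES ({}, {});".format(
            q(u["id"]), q(u.get("name", ""))))
        stmts.append(
            "REPLACE INTO t_user_profile (parent_id, email, real_name, display_name) "
            "VALUES ({}, {}, {}, {});".format(
                q(u["id"]), q(p.get("email", "")), q(p.get("real_name", "")),
                q(p.get("display_name", ""))))
    return "\n".join(stmts)

# usage sketch
# with open("t_user.sql", "w", encoding="utf-8") as out:
#     out.write(users_to_sql("tymora1_export/users.json"))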