Category Archives: Phil

Phil 5.26.19

Tikkun olam (Hebrew for “world repair”) has come to connote social action and the pursuit of social justice. The phrase has origins in classical rabbinic literature and in Lurianic kabbalah, a major strand of Jewish mysticism originating with the work of the 16th-century kabbalist Isaac Luria.

Cooperation in large-scale human societies — What, if anything, makes it unique, and how did it evolve?

  • There is much controversy about whether the cooperative behaviours underlying the functioning of human societies can be explained by individual self-interest. Confusion over this has frustrated the understanding of how large-scale societies could ever have evolved and be maintained. To clarify this situation, we here show that two questions need to be disentangled and resolved. First, how exactly do individual social interactions in small- and large-scale societies differ? We address this question by analysing whether the exchange and collective action dilemmas in large-scale societies differ qualitatively from those in small-scale societies, or whether the difference is only quantitative. Second, are the decision-making mechanisms used by individuals to choose their cooperative actions driven by self-interest? We address this question by extracting three types of individual decision-making mechanism (three types of “minds”) that have been assumed in the literature, and compare the extent to which these decision-making mechanisms are sensitive to individual material payoff. After addressing the above questions, we ask: what was the key change from other primates that allowed for cooperative behaviours to be maintained as the scale of societies grew? We conclude that if individuals are not able to refine the social interaction mechanisms underpinning cooperation, i.e., change the rules of exchange and collective action dilemmas, then new mechanisms of transmission of traits between individuals are necessary. Examples are conformity-biased or prestige-biased social learning, as stressed by the cultural group selection hypothesis. But if individuals can refine and adjust their social interaction mechanisms, then no new transmission mechanisms are necessary and cooperative acts can be sustained in large-scale societies entirely by way of self-interest, as stressed by the institutional path hypothesis. Overall, our analysis contributes to the theoretical foundation of the evolution of human social behaviour.

Phil 5.24.19

7:00 – 3:30 ASRC GEOS

Phil 5.23.19

7:00 – 5:00 ASRC GEOS

  • Saw 4×3000 with David and Roger last night. The VR lab seems to be a thing. Need to go down and have a chat, possibly about lists, stories, maps and games
  • Found the OECD Principles on Artificial Intelligence. I haven’t had a chance to actually *read* any of it, but I did create a “Treaty Lit” folder in the Sanhedrin folder and put PDF versions there. I ran my tool over them and got the following probe (a sketch of one way such a probe could be generated appears after this list):
    • state cyber cybercrime agree united china mechanism
  • Putting that into Google Scholar returns some good hits as well, though I haven’t gotten a chance to do anything beyond that.
  • JASSS paper
    • Changing “consensus” to “alignment”, and breaking up many paragraphs. I think the setup of the space in the introduction is better now.
  • Got caught up on NESDIS. Worked some on the slide deck, which I finally got back. Scheduled a walkthrough with T tomorrow
  • GEOS AI/ML meeting at NSOF. Still trying to figure out roles and responsibilities. I think the Sanhedrin concept will help Bruce formalize our contributions.
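
The probe tool itself isn’t shown here, but a minimal sketch of one way to generate that kind of probe, assuming scikit-learn’s TfidfVectorizer; the file names are hypothetical:

    # Minimal sketch of generating a search "probe" from a document
    # set; NOT the actual tool. File names are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [open(f, encoding="utf8").read()
            for f in ["oecd_principles.txt", "treaty_lit_2.txt"]]

    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    tfidf = vec.fit_transform(docs)

    # rank terms by summed tf-idf weight across the corpus and take
    # the top few as the probe string
    scores = tfidf.sum(axis=0).A1
    terms = vec.get_feature_names_out()
    top = sorted(zip(scores, terms), reverse=True)[:7]
    print(" ".join(term for _, term in top))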

Phil 5.16.19

7:00 – 9:00 ASRC GEOS/AIMES

  • Worked on the slides a bit
  • Adding changes to the JASSS paper
  • Waiting for meeting
  • Meeting went well, I think? Funding appears to be solid, and I’m now a “Futurist”
  • Meeting with Shimei’s group. Fatima might be interested in ML summer work
  • Meeting with Aaron. Fleshed out the Sanhedrin-17a concept

Phil 5.14.19

7:00 – 8:00 ASRC NASA GOES-R

  • More Dissertation
  • Break out the network slides into “island” (initial state), “star” (radio), “cyclic star” (talk radio), and “dense” (social media); see the sketch after this list
  • MatrixScalar
  • 7:30 Waikato meeting.
    • Walked through today’s version, which is looking very nice
    • Went over tasking spreadsheets
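
A quick networkx sketch of the four slide topologies, for reference. Mapping “cyclic star” to a wheel graph (a star plus a cycle around the rim) is my reading, not necessarily what the slides show:

    # Sketch of the four network-slide topologies; the "cyclic star"
    # = wheel graph mapping is an assumption
    import networkx as nx

    n = 8
    island = nx.empty_graph(n)       # isolated nodes: initial state
    star = nx.star_graph(n - 1)      # hub and spokes: radio
    cyclic_star = nx.wheel_graph(n)  # hub plus rim cycle: talk radio
    dense = nx.complete_graph(n)     # everyone to everyone: social media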

Phil 5.13.19

7:00 – 3:00 ASRC NASA GOES-R

Phil 5.10.19

7:00 – 4:00 ASRC NASA GOES

  • TensorFlow Graphics? TF-Graphics
  • An End-to-End AutoML Solution for Tabular Data at KaggleDays
  • More dissertation writing. Added a bit on The Sorcerer’s Apprentice and finished my first pass at Moby-Dick
  • Add pickling to MatrixScalar – done!
    import pickle

    def save_class(the_class, filename: str):
        print("save_class")
        # open in 'wb' (write binary): 'ab' would append a new pickle
        # on every save instead of replacing the old one
        with open(filename, 'wb') as dbfile:
            pickle.dump(the_class, dbfile)


    def restore_class(filename: str) -> MatrixScalar:
        print("restore_class")
        # reading also requires binary mode
        with open(filename, 'rb') as dbfile:
            return pickle.load(dbfile)
  • Added a flag to allow unlimited input buffer cols; with no input_size arg, it automatically sizes to the max (hypothetical sketch after this list)
  • NOTE: Add a “notes” dict to the setup tab for run information
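
A hypothetical sketch of the input_size auto-sizing behavior mentioned above; the class structure and names are assumed, not the actual MatrixScalar code:

    # Hypothetical sketch of the unlimited-input-buffer-cols flag;
    # not the real MatrixScalar implementation
    class MatrixScalarSketch:
        def __init__(self, data, input_size: int = None):
            # with no input_size arg, size the input buffer to the
            # widest row in the (possibly ragged) data
            if input_size is None:
                input_size = max(len(row) for row in data)
            self.input_size = input_size
            self.data = data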

Phil 5.9.19

Finished Army of None. One of the deepest, most thorough analyses of human-centered AI/ML I’ve ever read.

7:00 – 4:00 ASRC NASA GOES-R

  • Create spreadsheets for tasks and bugs
  • More dissertation. Add Axelrod
  • Add reading and saving of matrices
    • Well, I can write everything, but xlsxwriter won’t read anything back in; it’s a write-only library (see the read sketch after this list)
    • Tomorrow add pickling
  • Price to win analytic?
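
Since xlsxwriter only writes, reading a matrix back needs a different library. A minimal sketch, assuming openpyxl; the file and sheet names are hypothetical:

    # Minimal sketch of reading a matrix back from a spreadsheet.
    # xlsxwriter is write-only, so this assumes openpyxl on the
    # read side; file and sheet names are hypothetical.
    from openpyxl import load_workbook

    wb = load_workbook("matrix_scalar.xlsx")
    ws = wb["input"]
    mat = [[cell.value for cell in row] for row in ws.iter_rows()]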

4:30 – 7:00 ML Seminar

7:00 – 9:00 Meeting with Aaron M

  • Tried to get biber working, but it produces a blank bib file. Need to look into that
  • Got the AI paper uploaded to Aaron’s new account. arXiv also has problems with biber
  • Spent the rest of the meeting figuring out the next steps. It’s potentially something along the lines of using ML to build an explainable model for different sorts of ML systems (e.g. Humans-on-the-loop <-> Forensic, post-hoc interaction)

Phil 5.8.19

7:00 – 5:00 ASRC NASA GOES-R

  • Create spreadsheets for tasks and bugs
  • More dissertation. Add Axelrod
  • Mission Drive today, no meeting with Wayne, I think
  • Good visualization tool finds this morning:
    • Altair: Declarative Visualization in Python
    • deck.gl is a WebGL-powered framework for visual exploratory data analysis of large datasets.
  • Matrix Class
    • Change test and train within the class to input and target
    • Create the test and train sets as outputs from the rectangular matrix with masks included. This means I have to re-assemble that matrix from the input and target matrices (a sketch of the implied pad-and-scale scheme follows the printout below)
    • I still like the idea of persisting these values as Excel spreadsheets
    • And now a pile of numbers that makes me happy:
      MatrixScalar:
      	rows = 5
      	input_size = 5
      	target_size = 5
      	mask_value(hex) = -1
      	tmax_cols = 6
      	mat_min = 0.13042279608514273
      	mat_max = 9.566827711787509
      
      input_npmat = 
      [4.384998306058251, 6.006494724381491, 7.061283542583833, 7.817876758859971, 7.214499436254831]
      [0.15061642402352393, 2.818956354589415, 5.04113793598655, 6.31250083574919]
      [2.8702355283795837, 5.564035171373476, 7.81403258383623, 8.590265450278785, 9.566827711787509]
      [0.1359688602006689, 0.8005043254115471, 2.080391037187722, 1.9828746089685887, 2.4669996344853677]
      [0.33676501126574077]
      
      target_npmat = 
      [6.529725859535821, 4.8702784287160075, 3.677355933557321, 1.5184287945320327, -0.5429800453619322]
      [7.629655798004273, 8.043579124885415, 7.261429015491849, 7.137935661381686, 5.583232751491164]
      [8.997538924797388, 8.32502866049641, 6.5215023090524085, 4.725363596736856, 1.3761131232325439]
      [2.270623038824647, 2.430147101210101, 2.0903103552937132, 1.6846416494136842, 1.4289540998497225]
      [1.897999998722116, 1.9054555934093833, 2.883358420829866, 3.703791108487346, 4.011103843736698]
      
      scaled_input_npmat = 
      [0.5608937619909073, 0.7683025595887466, 0.9032226729055693, 0.9999999999999999, 0.9228208193584869]
      [0.023860024409113324, 0.44656728417761093, 0.798596002940291, 1.0]
      [0.30001956916639155, 0.5815966733171017, 0.8167840813322457, 0.8979220394754807, 1.0]
      [0.055115071076624986, 0.32448497933342324, 0.8432879389630332, 0.8037595876588889, 1.0]
      [1.0]
      
      scaled_target_npmat = 
      [0.8352300836842569, 0.6229668973991612, 0.47037783364770835, 0.1942252150254483, -0.06945364606145459]
      [1.2086581842168989, 1.2742301877146247, 1.1503252362944103, 1.1307619352630993, 0.88447239798718]
      [0.9404934630223677, 0.8701974062142825, 0.6816786614665502, 0.4939321308059752, 0.143842155904721]
      [0.9203986117729256, 0.9850618002693972, 0.847308741385268, 0.6828706522143898, 0.5792275279958895]
      [5.635977418165948, 5.658116281877606, 8.561929904750741, 10.998147030080538, 11.910690569251393]
      
      scaled, masked input = 
      [[ 0.56089376  0.76830256  0.90322267  1.          0.92282082]
       [-1.          0.02386002  0.44656728  0.798596    1.        ]
       [ 0.30001957  0.58159667  0.81678408  0.89792204  1.        ]
       [ 0.05511507  0.32448498  0.84328794  0.80375959  1.        ]
       [-1.         -1.         -1.         -1.          1.        ]]
      scaled target = 
      [0.8352300836842569, 0.6229668973991612, 0.47037783364770835, 0.1942252150254483, -0.06945364606145459]
      [1.2086581842168989, 1.2742301877146247, 1.1503252362944103, 1.1307619352630993, 0.88447239798718]
      [0.9404934630223677, 0.8701974062142825, 0.6816786614665502, 0.4939321308059752, 0.143842155904721]
      [0.9203986117729256, 0.9850618002693972, 0.847308741385268, 0.6828706522143898, 0.5792275279958895]
      [5.635977418165948, 5.658116281877606, 8.561929904750741, 10.998147030080538, 11.910690569251393]
      
      scaled = [ 6.52972586  4.87027843  3.67735593  1.51842879 -0.54298005], error = 0.0
      scaled = [7.6296558  8.04357912 7.26142902 7.13793566 5.58323275], error = 0.0
      scaled = [8.99753892 8.32502866 6.52150231 4.7253636  1.37611312], error = 0.0
      scaled = [2.27062304 2.4301471  2.09031036 1.68464165 1.4289541 ], error = 0.0
      scaled = [1.898      1.90545559 2.88335842 3.70379111 4.01110384], error = 0.0
      
      input_train = 
      [[ 0.05511507  0.32448498  0.84328794  0.80375959  1.        ]
       [-1.         -1.         -1.         -1.          1.        ]
       [ 0.30001957  0.58159667  0.81678408  0.89792204  1.        ]]
      
      input_test = 
      [[ 0.56089376  0.76830256  0.90322267  1.          0.92282082]
       [-1.          0.02386002  0.44656728  0.798596    1.        ]]
      
      target_train = 
      [[ 0.92039861  0.9850618   0.84730874  0.68287065  0.57922753]
       [ 5.63597742  5.65811628  8.5619299  10.99814703 11.91069057]
       [ 0.94049346  0.87019741  0.68167866  0.49393213  0.14384216]]
      
      target_test = 
      [[ 0.83523008  0.6229669   0.47037783  0.19422522 -0.06945365]
       [ 1.20865818  1.27423019  1.15032524  1.13076194  0.8844724 ]]
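
Reading the printout, each ragged input row appears to be left-padded with the mask value and divided by its own max, with the matching target row divided by the same factor (which is why the error lines come back 0.0 after unscaling). A minimal numpy sketch of that scheme; the function name is mine, not MatrixScalar’s:

    # Sketch of the pad-and-scale scheme implied by the printout;
    # not the actual MatrixScalar code
    import numpy as np

    def pad_and_scale(input_rows, target_rows, mask_value=-1.0):
        width = max(len(r) for r in input_rows)
        padded = np.full((len(input_rows), width), mask_value)
        scaled_targets = []
        for i, (in_row, tgt_row) in enumerate(zip(input_rows, target_rows)):
            scale = max(in_row)  # per-row scale from the input row's max
            padded[i, width - len(in_row):] = np.array(in_row) / scale
            scaled_targets.append(np.array(tgt_row) / scale)
        return padded, scaled_targets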

Phil 5.7.19

7:00 – 8:00 ASRC NASA GOES-R

  • Via CSAIL: “The team’s approach isn’t particularly efficient now – they must train and “prune” the full network several times before finding the successful subnetwork. However, MIT professor Michael Carbin says that his team’s findings suggest that, if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether. Such a revelation has the potential to save hours of work and make it easier for meaningful models to be created by individual programmers and not just huge tech companies.”
    • From the abstract of The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: “We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the “lottery ticket hypothesis:” dense, randomly-initialized, feed-forward networks contain subnetworks (“winning tickets”) that – when trained in isolation – reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.”
    • Sounds like a good opportunity for evolutionary systems (a minimal pruning sketch appears after this list)
  • Finished with text mods for IEEE letter
  • Added Kaufman and Olfati-Saber to the discussion on Social Influence Horizon
  • Started the draft deck for the tech summit
  • More MatrixScalar
    • Core functions work
    • Change test and train within the class to input and target
    • Create a coordinating class that loads and creates test and train matrices
  • JuryRoom meeting
    • Progress is good enough to start tracking it. Going to create a set of Google sheets that keep track of tasks and bugs
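
The “prune” step the Carbin quote refers to is typically magnitude pruning; a minimal numpy sketch of one pruning round, not the paper’s code:

    # Minimal sketch of one magnitude-pruning step, the operation
    # behind lottery-ticket search; not the paper's implementation
    import numpy as np

    def magnitude_prune_mask(weights: np.ndarray, fraction: float) -> np.ndarray:
        # zero out the smallest-magnitude `fraction` of the weights by
        # returning a binary mask to apply after each training round
        k = int(weights.size * fraction)
        threshold = np.sort(np.abs(weights), axis=None)[k]
        return (np.abs(weights) >= threshold).astype(weights.dtype)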

Phil 5.6.19

7:00 – 5:00 ASRC GOES-R

  • Finished the AI/ML paper with Aaron M over the weekend. I need to have him ping me when it goes in. I think it turned out pretty well, even when cut down to 7 pages (with references!! Why, IEEE, why?)
    • Sent a copy to Wayne, and distributed around work. Need to put it on arXiv on Thursday
  • Starting to pull parts from phifel.com to make the lit review for the dissertation. Those reviews may have had a reason after all!
    • And oddly (though satisfying), I wound up adding a section on Moby-Dick as a way of setting up the rest of the lit review
  • More MatrixScalar class work. Basically a satisfying day of just writing code.
  • Need to fix IEEE letter and take a self-portrait. Need to charge up the good camera