Phil 11.3.17

7:00 – ASRC MKT

  • Good comments from Cindy on yesterday’s work
  • Facebook’s 2016 Election Team Gave Advertisers A Blueprint To A Divided US
  • Some flocking activity? AntifaNov4
  • I realized that I had not added the herding variables to the Excel output. Fixed.
  • DINH Q. LÊ: South China Sea Pishkun
    • In his new work, South China Sea Pishkun, Dinh Q. Lê references the horrifying events that occurred on April 30th, 1975 (the day Saigon fell), as hundreds of thousands of people tried to flee Saigon from the encroaching North Vietnamese Army and Viet Cong. The mass exodus was a “Pishkun,” a term used to describe the way in which the Blackfoot American Indians would drive roaming buffalo off cliffs in what is known as a buffalo jump.
  • Back to writing – got some done, mostly editing.
  • Stochastic gradient descent with momentum (a quick sketch of the update rule follows the quote below)
  • Referred to in this: There’s No Fire Alarm for Artificial General Intelligence
    • AlphaGo did look like a product of relatively general insights and techniques being turned on the special case of Go, in a way that Deep Blue wasn’t. I also updated significantly on “The general learning capabilities of the human cortical algorithm are less impressive, less difficult to capture with a ton of gradient descent and a zillion GPUs, than I thought,” because if there were anywhere we expected an impressive hard-to-match highly-natural-selected but-still-general cortical algorithm to come into play, it would be in humans playing Go.
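  • For my own reference, here is a minimal sketch of the momentum update the linked post describes. The function name and the toy quadratic objective are my own placeholders, not from the article:

```python
import numpy as np

def sgd_momentum(grad_fn, w0, lr=0.01, beta=0.9, steps=200):
    """Minimize a function via gradient descent with momentum.

    v keeps an exponentially decaying sum of past gradients, which
    damps oscillation and speeds travel along consistent directions.
    """
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v - lr * g  # accumulate the velocity
        w = w + v              # step along the velocity
    return w

# Toy check: minimize f(w) = ||w||^2, whose gradient is 2w
print(sgd_momentum(lambda w: 2.0 * w, w0=[3.0, -4.0]))  # approaches [0, 0]
```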
  • In another article: The AI Alignment Problem: Why It’s Hard, and Where to Start
    • This is where we are on most of the AI alignment problems, like if I ask you, “How do you build a friendly AI?” What stops you is not that you don’t have enough computing power. What stops you is that even if I handed you a hypercomputer, you still couldn’t write the Python program that if we just gave it enough memory would be a nice AI.
    • I think this is where models of flocking and “healthy group behaviors” matter. Exploring in small numbers is healthy – it defines the bounds of the problem space. Flocking is a good way to balance bounded trust and balanced awareness. Runaway echo chambers are very bad. These patterns are recognizable, regardless of whether they come from human, machine, or bison.
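    • As a note to self, the explore / flock / echo-chamber distinction can be caricatured with a single social-influence weight in a toy heading-alignment model. Everything below is a hypothetical sketch, not the simulator’s code:

```python
import numpy as np

def step(headings, influence, noise, rng):
    """One update of a toy heading-alignment model.

    influence ~ 0.0 : explorers – agents mostly ignore the group
    influence ~ 0.5 : flocking – bounded trust plus group awareness
    influence ~ 1.0 : echo chamber – headings collapse to one value
    """
    mean = headings.mean()  # the herd's current consensus heading
    jitter = rng.normal(0.0, noise, headings.shape)
    return (1.0 - influence) * (headings + jitter) + influence * mean

rng = np.random.default_rng(0)
headings = rng.uniform(-np.pi, np.pi, 50)  # 50 agents, random headings
for _ in range(100):
    headings = step(headings, influence=0.9, noise=0.0, rng=rng)
print(headings.std())  # near zero: the runaway echo-chamber regime
```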
  • Added contacts and invites. I think the DB is ready: polarizationgameone
  • While out riding, I realized what I can do to show results in the herding paper. There are at least three ways to herd, plus a no-herding baseline (a code sketch follows this list):
    1. No herding
    2. Take the average of the herd
    3. Weight a random agent
    4. Weight random agents (randomly select an agent, leave it that way for a few cycles, then switch)
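  • A rough sketch of how those four conditions might look over a 1-D population of agent opinions. The names and the relaxation update are placeholders, not the actual model code:

```python
import numpy as np

def herding_target(opinions, mode, leader):
    """The value each agent is pulled toward under each condition."""
    if mode == "none":
        return opinions             # 1. no herding: no pull at all
    if mode == "average":
        return opinions.mean()      # 2. the average of the herd
    # 3 and 4 both follow a single agent; they differ only in how
    # often the caller re-picks `leader` (never vs. every few cycles)
    if mode in ("random_fixed", "random_switching"):
        return opinions[leader]
    raise ValueError(mode)

rng = np.random.default_rng(1)
opinions = rng.uniform(-1.0, 1.0, 20)   # 20 agents on a 1-D opinion axis
leader = rng.integers(len(opinions))
for t in range(50):
    if t % 5 == 0:                      # condition 4: switch every 5 cycles
        leader = rng.integers(len(opinions))
    target = herding_target(opinions, "random_switching", leader)
    opinions += 0.1 * (target - opinions)  # relax toward the herding target
print(opinions.round(3))                # population tracks the leaders
```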
  • Look at the time it takes for each of these to converge and see which one is best. Also look at the DTW between the resulting trajectories to see whether they would read as different populations (a DTW sketch follows the next item).
  • Then re-do the above for the case where the two populations are inverted (max polarization).
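  • For the DTW comparison, a self-contained distance function is probably enough at this scale. This is the standard O(n·m) dynamic-programming DTW, not code from the simulator, and the two trajectories are placeholders:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D series.

    Returns the minimum cumulative |a[i] - b[j]| cost over all
    monotone alignments of the two sequences.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# e.g. compare mean-opinion trajectories from two herding conditions
run_a = np.sin(np.linspace(0.0, 3.0, 100))         # placeholder series
run_b = np.sin(np.linspace(0.0, 3.0, 100) + 0.25)  # shifted variant
print(dtw_distance(run_a, run_b))
```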
  • Started to put in the code changes for the above. There is now a combobox for herding with the above options.
