
Phil 6.11.20

Call Simon

GPT-2 Agents

  • Embeddings and plots
  • Got the sequences generated. They look pretty cool too, like codes:
    e2 e4 c7 c5 g1 f3 b8 c6 d2 d4 c5 d4 f3 d4 g7 g6 b1 c3 f8 g7 f1 e2 d7 d6 c1 g5 a7 a6 d1 e2 f6 e8 f2 f3 e8 c7 g5 f4 f7 f5 e4 f5 g6 f5 e2 f3 c7 d5 f3 g4 c8 d7 e2 d2 d8 c7 b2 b3 f5 f4 d4 b3 d7 f5 f1 e1 e7 f5 e1 e6 f5 g4 b3 d4 a8 c8 d4 f5 c6 f5 e6 f6 f5 d4 a1 e1 g4 h5 f3 f4 f8 f6 d2 f6 c8 f8 f6 h4 d4 e6 c2 c3 e6 d4 c3 d4 h5 f3 e1 e7 f8 f7 e7 f7 g7 f7 g1 f2 b7 b5 c4 b5 a6 b5 g2 g4 f7 g6 h2 h3 g8
    e2 e4 c7 c6 d2 d4 d7 d5 b1 d2 g8 f6 f1 d3 d5 e4 d2 e4 b8 d7 g1 f3 e7 e6 d1 e2 f6 e4 d3 e4 d8 c7 e4 b1 d7 f6 c1 g5 f6 g4 h7 h6 g5 h4 c7 d7 e2 e3 g4 e5 h1 g1 e5 c6 f2 f3 f8 e7 e3 e2 g2 g4 f8 d8 f3 e5 d7 d3 e2 d3 d8 d3 g1 d1 a7 a6 c1 b1 d3 d6 b1 a1 e7 f6 a2 a3 c8 e6 f3 e4 b7 b5 b2 b4 f6 g7 b4 a5 b5 a4 e5 c6 a8 b8 d1 f1 a4 a3 c6 e5 a3 a2 h4 e1 a2 a1 f1 a1 b8 a1 d4 d5 e6 c8 d1 b1 g8 f8 a1 b1 f8 e7 b1 c2 e7 d6 e5 d7 g7 d4
    e2 e4 e7 e6 d2 d4 d7 d5 b1 c3 f8 b4 e4 e5 b4 c3 b2 c3 g8 e7 d1 b3 c7 c5 a2 a3 b8 c6 f2 f4 b7 b5 a3 a4 b5 b4 b3 b2 a4 a5 c5 d4 c3 d4 e7 g6 g1 f3 c6 e7 c8 a6 c1 g5 e7 g8 a1 b1 a8 c8 e5 d6 g8 f6 g5 f6 d8 f6 f1 f2 h7 h6 b2 b5 f6 d6 f3 h4 d6 e7 h4 g6 a6 g2 g6 e7 f8 e8 e7 f5 g2 f3 g1 g2 c8 c2 b1 c1 c2 c8 a5 a6 c8 a8 h2 h3 f3 e4 b5 b3 f7 f6 b3 b2 f6 f5 f5 d6 e6 e5 b2 a1 e8 a8 f2 f5 f5 e4 f5 f7 a8 b8 c1 f1 b8 b5 a1 a2 a8 a7
    e3 d2 d4 g8 f6 c2 c4 e7 e6 b1 c3 f8 b4 e2 e3 c1 d2 d7 d5 c4 d5 f6 d5 f2 f3 b8 c6 g1 f3 f7 f5 g2 g4 f5 g4 d1 e2 d5 f4 d2 f4 e6 f5 e2 e5 b4 c3 e5 c3 d8 d3 e1 e2 d3 e2 e2 e2 f5 f4 e2 e1 f4 f3 e1 f2 a8 d8 f2 f1 c8 e6 f4 e3 f8 f7 f3 h4 e6 d5 f1 g2 f7 f3 g2 f1 f3 h3 f1 e1 d5 e4 e1 f2 h3 h4 g4 h5 h4 h5 f2 f3 h5 f5 f3 g2 f5 h5 e3 g5 h5 h3 g5 e3 h3 h4 e3 g5 e4 g2 g2 g3 g2 d5 g3 h3 d5 e6 g5 f6 g8 h7
    e2 e4 e7 e5 g1 f3 b8 c6 b1 c3 g8 f6 f1 b5 d7 d6 d2 d3 a7 a6 f8 e7 b5 a4 b7 b5 a4 b3 h2 h3 c8 b7 a2 a4 b5 b4 a4 b5 c6 b8 c1 g5 f6 e8 f3 e5 e8 d6 g5 f4 d8 e7 f4 d6 e7 d6 e5 f3 d6 e7 c3 a4 c7 c6 a4 c5 c6 c5 d3 d4 e7 e5 d4 c5 b7 c8 a1 c1 b8 d7 f3 e5 f7 f6 e4 f5 d7 f6 c1 c6 e5 e4 g1 h1 f6 h5 f1 e1 h5 g3 e5 d7 e4 h4 c6 c4 h4 g3 d1 g4 g3 h3 h1 g1 h3 h1 g1 h2 h1 h5 g4 f5 c8 b7 c4 c5 h5 e2 e1 d1 e7 f6 b2 b3 f6 e5 c5 c8 a8 c8


  • Drawing the embeddings. Fun, but not really useful. And this is kind of my point about embeddings like W2V that don’t take into account the trajectory of the sentence the word is part of. We know that the structure of the board is represented in the text. We need a more sophisticated embedding to extract it.

square_embeddings

  • Something that might make sense is to see how these points cluster as well (a quick clustering sketch follows this list)
  • I think I might try plotting individual columns later, but first I’m going to try building some from/to networks by piece
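
Here’s a minimal sketch of what I mean by clustering the points. It assumes the square embeddings have already been reduced to 2D; the xy array is just a stand-in for the coordinates behind the square_embeddings plot:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    # stand-in for the 64 projected square embeddings (replace with the real 2D coords)
    xy = np.random.rand(64, 2)
    labels = KMeans(n_clusters=4, random_state=0).fit_predict(xy)

    plt.scatter(xy[:, 0], xy[:, 1], c=labels)  # color each square by its cluster
    plt.title("square embedding clusters")
    plt.show()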

GOES

  • Try downloading yaw flip
  • Was able to connect to the server, though now I don’t need a port number?
  • Specifying the queries. Fixed a few mnemonics.
  • Had to try a few times, but I got it!

influx_copy

  • Not sure what to do next. Update GVSETS paper?
  • 2:00 CASSIE meeting – learned a lot of things
  • I got promoted!
  • Implemented a perplexity measure. Looking at this as a way of understanding mode collapse, and maybe conspiracy theories? (A minimal perplexity sketch follows this list.)
  • Done for the day
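
A minimal sketch of the perplexity idea, using the Hugging Face transformers API. The stock gpt2 checkpoint is a placeholder for whatever fine-tuned model is being scored:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # perplexity = exp(mean negative log-likelihood of the tokens under the model)
        ids = tokenizer.encode(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(ids, labels=ids)[0]  # mean cross-entropy per token
        return torch.exp(loss).item()

    print(perplexity("White moves pawn from e2 to e4."))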

Phil 6.10.20

Finish ACSOS review

GPT-2 Agents

  • Generate embeddings
    • Tried running much longer sequences (max_length = 1000). This lets games run long enough that they often conclude (the term “resigns”, “wins”, or “draw” occurs in the text)
    • Put together a simple regex ‘[a-h][1-8]’ that pulls out all the squares in sequence from a game (see the sketch after this list)
    • Extracting game square sequences to create files that will feed into Word2Vec. The class is started and most of the issues are worked out. I added a check for game endings so beginnings and endings are not placed together oddly.
    • Here’s the trimmed input text
      The game begins as white uses the Sicilian opening. and black countering with Najdorf, Adams attack. Loek Van Wely moves white pawn from e2 to e4. Black moves pawn from c7 to c5. In move 2, White moves knight from g1 to f3. Black moves pawn from d7 to d6. White moves pawn from d2 to d4. Black moves pawn from c5 to d4. Black takes white pawn. White moves knight from f3 to d4. White takes black pawn. Black moves knight from g8 to f6. In move 5, White moves knight from b1 to c3. Arseniy Nesterov moves black pawn from a7 to a6. Loek Van Wely moves white bishop from c1 to e3. Black moves pawn from e7 to e6. In move 7, White moves pawn from f2 to f4. Black moves knight from b8 to d7. White moves queen from d1 to d2. Black moves pawn from b7 to b5. Loek Van Wely queenside castles. Black moves bishop from f8 to e7. White moves bishop from f1 to d3. Arseniy Nesterov kingside castles. White moves king from c1 to b1. Black moves rook from a8 to b8. White moves pawn from g2 to g3. Black moves queen from d8 to a5. Loek Van Wely moves white king from b1 to a1. Black moves bishop from e7 to d6. Black takes white knight. Loek Van Wely moves white bishop from d3 to e4. White takes black pawn. Black moves rook from b8 to b2. Black takes white pawn. In move 17, White moves bishop from e4 to h7. White takes black pawn. Check. Arseniy Nesterov moves black king from g8 to h8. White moves bishop from h7 to d3. Black moves bishop from d6 to f4. Black takes white pawn. Check. In move 19, White moves bishop from e3 to f4. White takes black bishop. Black moves rook from b2 to f2. White moves rook from h1 to f1. Black moves knight from d7 to e5. White moves queen from d2 to e2. Black moves queen from a5 to d2. In move 22, White moves knight from c3 to e2. White takes black queen. Black moves rook from f2 to e2. Black takes white knight. Loek Van Wely moves white bishop from f4 to e3. Black moves rook from e2 to e3. Black takes white bishop. White moves pawn from f4 to f5. Black moves rook from f8 to d8. White moves pawn from a2 to a4. Arseniy Nesterov moves black bishop from c8 to b7. White moves pawn from a4 to a5. Arseniy Nesterov moves black bishop from b7 to c8. White moves pawn from a5 to b6. White takes. Arseniy Nesterov moves black pawn from a6 to b5. Black takes white pawn. White moves queen from e2 to b5. White takes black pawn. Black moves knight from e5 to c4. White moves pawn from h2 to h3. Black moves knight from c4 to a5. In move 30, Loek Van Wely moves white queen from b5 to a4. Arseniy Nesterov moves black pawn from h7 to h6. White moves bishop from d3 to b1. Black moves rook from d8 to d1. Check. Loek Van Wely
    • And here’s the sequence
      e2 e4 c7 c5 g1 f3 d7 d6 d2 d4 c5 d4 f3 d4 g8 f6 b1 c3 a7 a6 c1 e3 e7 e6 f2 f4 b8 d7 d1 d2 b7 b5 f8 e7 f1 d3 c1 b1 a8 b8 g2 g3 d8 a5 b1 a1 e7 d6 d3 e4 b8 b2 e4 h7 g8 h8 h7 d3 d6 f4 e3 f4 b2 f2 h1 f1 d7 e5 d2 e2 a5 d2 c3 e2 f2 e2 f4 e3 e2 e3 f4 f5 f8 d8 a2 a4 c8 b7 a4 a5 b7 c8 a5 b6 a6 b5 e2 b5 e5 c4 h2 h3 c4 a5 b5 a4 h7 h6 d3 b1 d8 d1


    • I can do other things like split into white and black, but that’s pretty tricky and I don’t think it’s worth it
  • Start building networks. Here are some API possibilities
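
For reference, a minimal sketch of the square-extraction regex mentioned above (the game_text string is a stand-in, not actual model output):

    import re

    game_text = ("The game begins as white uses the Sicilian opening. "
                 "White moves pawn from e2 to e4. Black moves pawn from c7 to c5.")

    squares = re.findall(r'[a-h][1-8]', game_text)  # every square mentioned, in order
    print(' '.join(squares))  # -> e2 e4 c7 c5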

GOES

  • If the devlab is still up, work on pulling down data. Nope, the VPN is working so badly today that I can’t even load my webmail
  • Going to work on the download and transfer using my local Influx – done!

influx_copy

Complete copy of remote data on local server

  • 2:00 Meeting

Phil 6.9.20

I’ve been thinking about writing a paper about how the development of conspiracy theories resembles the mode collapse condition in GAN creation. There is some recent research in the GAN community on developing tools for detecting mode collapse (Jai & Zhao, 2019) that I think could be extended to identifying conspiracy theories and the processes that create them. Maybe for ICTAI 2020?

Studying Programming in the Neuroage: Just a Crazy Idea?

  • What we were proposing to do was simple yet ambitious. Using functional magnetic resonance imaging, we might better understand what goes on in the minds of programmers as they read and understand code.
  • The results indicated that a specific network of brain areas in the left hemisphere was used by participants to understand code, including areas related to working memory, divided attention, and reading comprehension. Surprisingly, we did not observe cognitive processes related to mathematical and logical reasoning, which would be consistent with the perspective that programming is a formal, logical, and mathematical process.

GPT-2

  • Start storing data!

db

  • Try new probes – done!
    probe_list = ['The game begins as ', 'In move 10', 'In move 20', 'In move 30', 'In move 40', 'White takes black ', 'Black takes white ', 'Check. ']
  • Added the raw text to the table_moves instead of comments. Here’s the new and improved:

db

  • Start thinking about metrics, like adjacency matrices by piece?
  • Build gensim embeddings of games and pieces (a minimal Word2Vec sketch follows this list)
  • Build agent network graphs by piece type and from/to
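
A minimal sketch of the gensim step, with an assumed file name and parameters rather than the actual class:

    from gensim.models import Word2Vec

    # hypothetical output of the square-sequence extraction, one game per line
    with open('chess_square_sequences.txt') as f:
        games = [line.split() for line in f if line.strip()]

    # size= in gensim 3.x (vector_size= in 4.x)
    model = Word2Vec(sentences=games, size=100, window=5, min_count=1, workers=4)
    print(model.wv.most_similar('e4'))  # squares that co-occur with e4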

GOES

  • Nothing back from Vadim, so I think I’ll continue on my Wasserstein Loss algorithm
  • Hey! The DevLab Influx is reachable again!

influx

  • Going to take advantage of this and find more mnemonics. Done!
  • Now I need to figure out a good way to pull down the data and load it on my system
  • Working on DevLabInfluxQuery class that extends InfluxQuery. Done, but fails with one of these exceptions (a sketch of the query wrapper is below):
    • Unable to parse CSV response. FluxTable definition was not found.
    • (‘Connection aborted.’, TimeoutError(10060, ‘A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond’, None, 10060, None))
    • Tried upgrading my influxdb-client (pip install --upgrade influxdb-client) from 1.5 to 1.9. Hopefully I didn’t break too much. That didn’t fix it, but I got the port number from Boris, which did.

devlabInflux
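
For reference, a hedged sketch of the kind of query wrapper described above. This is a standalone illustration using the influxdb-client package, not the actual DevLabInfluxQuery/InfluxQuery classes, and the URL, token, org, and bucket are placeholders:

    from influxdb_client import InfluxDBClient

    class DevLabQuerySketch:
        def __init__(self, url: str, token: str, org: str):
            # note the explicit port in the URL - the port number turned out to matter here
            self.client = InfluxDBClient(url=url, token=token, org=org)
            self.query_api = self.client.query_api()

        def run_query(self, bucket: str, start: str, stop: str, measurement: str):
            flux = (f'from(bucket: "{bucket}") '
                    f'|> range(start: {start}, stop: {stop}) '
                    f'|> filter(fn: (r) => r._measurement == "{measurement}")')
            return self.query_api.query(flux)  # list of FluxTable objects

    dq = DevLabQuerySketch("http://devlab-host:9999", token="...", org="my-org")
    tables = dq.run_query("telemetry", "2020-04-06T15:30:00Z", "2020-04-06T18:00:00Z", "yaw")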

ML seminar

  • Discussed options for visualizing relationships between nodes. Starting with just straightforward plotting of connected nodes. Also, plotting by piece. Another option is to use the gensim library to do a word2vec embedding and visualize it. I think I’ll start there because I’m curious.
  • To improve the embedding, it might be useful to generate entire games and parse them.


Phil 6.8.20

Not at all happy with this COVID weight gain. My preferred stress management tool is exercise, but I’m already at a minimum of 20 miles/day, and usually 100+ miles on weekends.

Starting to think about writing something on the ethics of mode collapse

D20

Florida

GPT-2 Agents

  • Back to pulling move and piece information out of generated text – done
  • Added heuristic for move number
  • Created dicts for db data. Add writes tomorrow!

GOES

  • Adding read tests – done! Had to screw around with UTC conversions for a while (a small conversion sketch is at the end of this section)
    • Writes are roughly 1/2 sec per 1,000
    • Reads are about 2/100 sec per 1,000
  • Tried to log in and get on the devlab influx system – nope:

bad gateway

  • Trying to figure out what makes sense to do next. Ping Vadim? Done
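
Here’s a small sketch of the UTC conversion headache. It’s a generic helper under assumed formats, not the project code: Flux range() wants RFC3339 UTC timestamps, while the times I have are local:

    from datetime import datetime, timezone

    def to_utc_rfc3339(local_str: str, fmt: str = "%Y-%m-%d %H:%M:%S.%f") -> str:
        dt = datetime.strptime(local_str, fmt)      # naive local timestamp
        dt = dt.astimezone(timezone.utc)            # assume local tz, convert to UTC
        return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

    print(to_utc_rfc3339("2020-04-06 15:30:00.000"))  # e.g. '2020-04-06T19:30:00Z' from US Eastern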

Phil 6.7.20

I know it seems like an artifact from another time, but the map is coming along. Here’s the US, based on the 14-day trend in reported deaths:

dtz_map

Neural networks learning how to talk to each other. Need to see if there are any publications:

nn_chatter

Phil 6.5.20

GPT-2 Agents

  • Started a google doc for the GPT-2 Chess agents that will be grist for the paper(s)
  • Create probes for each piece, like:
    • white moves pawn from e2 to
    • black moves pawn from e7 to
    • A slightly more sophisticated parser will need to work with “The game begins”
  • I can take the results of multiple probes and store them in the table_moves, then run statistics by color, piece, etc
  • Then see if it’s possible to connect one piece to another piece using a “from/to chain” across multiple pieces. There will probably be some sort of distribution where the median(?) value should be a set of adjacent squares.
  • The connections can be tested by building adjacency matrices by piece and by move number range (a minimal adjacency-matrix sketch follows this list)
  • Started ChessMovesToDb. Might as well work on the tricky parse of “The game begins”. Making progress. My initial thought on how to parse moves doesn’t handle weird openings like “4.e3, Gligoric system with 7…dc”. Need to strip to the first occurrence of “move”, I think:
    white uses the Nimzo-Indian opening. and black countering with 4.e3, Gligoric system with 7...dc. White moves pawn from d2 to d4. Black moves knight from g8 to f
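
A minimal sketch of the adjacency-matrix idea. The moves list below is a placeholder for (from, to) pairs pulled from table_moves for one piece type:

    import numpy as np

    cols = 'abcdefgh'
    def sq_index(sq: str) -> int:  # 'e4' -> 0..63
        return cols.index(sq[0]) * 8 + (int(sq[1]) - 1)

    # hypothetical (from, to) pairs for, say, white pawns
    moves = [('e2', 'e4'), ('e4', 'e5'), ('d2', 'd4')]

    adj = np.zeros((64, 64), dtype=int)
    for frm, to in moves:
        adj[sq_index(frm), sq_index(to)] += 1  # count square-to-square transitions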

GOES

  • More timing tests
    • Add explicit time to the write
    • This should also be the basis of the system that will pull data from the DevLab Influx
  • Continue search for important mnemonics during yaw flip – nope, still can’t log into DevLab influx
  • Working on setting up arbitrary time spans and a new bucket for tests. Writing data works. Now I need to extend the number of series, tags, etc.

influx

  • All the writes are working. I’ll do the reads Monday. Everything looks pretty consistent, though.

timing

Phil 6.4.20

GPT-2 Agents

  • Thinking about how to parse the data to build maps.
    • Clearly, there are black and white agents (the players). Option 1 would be to simply collect the from-to points at the player level
    • One step down would be the piece family: pawns, rooks, bishops. The king and queen would be single instances
    • The most granular would be to track the individual pieces.
    • The issue I’m struggling with is when pieces pass over squares rather than through them. There is no explicit d3 when white moves the pawn from d2 to d4. Trying to think of the best way to uncover the latent information (a small path-interpolation sketch follows this list).
    • I think a good way to procrastinate about this problem is to parse the games into a database of moves. The format is always “<player/color> moves <(color)piece> from <start> to <end>”. There is additional information as well (game, players, move number), but that could be added later.
    • Done
    • table_moves
    • Sent a note to Thomas Wolf at Huggingface
    • I know what I’m going to do!
      • Create probes for each piece, like:
        • white moves pawn from e2 to
        • black moves pawn from e7 to
        • A slightly more sophisticated parser will need to work with “The game begins”
      • I can take the results of multiple probes and store them in the table_moves, then run statistics by color, piece, etc
      • Then see if it’s possible to connect one piece to another piece using a “from/to chain” across multiple pieces. There will probably be some sort of distribution where the median(?) value should be a set of adjacent squares.
      • The connections can be tested by building adjacency matrices by piece and by move number range
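
One possible way to recover the squares a piece passes over (the missing d3 above) is to just interpolate between the from and to squares. A small sketch, not the PGNtoEnglish code:

    def squares_between(start: str, end: str) -> list:
        cols = 'abcdefgh'
        c0, r0 = cols.index(start[0]), int(start[1]) - 1
        c1, r1 = cols.index(end[0]), int(end[1]) - 1
        steps = max(abs(c1 - c0), abs(r1 - r0))
        dc = (c1 - c0) // steps if c1 != c0 else 0   # -1, 0, or 1 per step
        dr = (r1 - r0) // steps if r1 != r0 else 0
        # squares strictly between start and end (knights jump, so handle them elsewhere)
        return [cols[c0 + i * dc] + str(r0 + i * dr + 1) for i in range(1, steps)]

    print(squares_between('d2', 'd4'))  # -> ['d3']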

GOES

  • Hey! The VPN is much more responsive today! Logged into Influx
  • Getting the right time for the query from here
    • start: 2020-04-06 15:30:00.000
    • end: 2020-04-06 18:00:00.000
    • Need to get the right mnemonics. Pinged Bruce
  • Starting some timing tests on my local influx copy
  • The VPN has stopped again
  • 2:00 NSOF Meeting – Nice demo by Jason
  • 3:30 AIMS IRAD
    • Status. John is going part time
    • Railed against the poor VPN access to the DevLab

ML Brownbag – Aaron did a nice job

Phil 6.3.20

MarthaRaddatz

“When the students poured into Tiananmen Square, the Chinese government almost blew it. Then they were vicious, they were horrible, but they put it down with strength. That shows you the power of strength” – Donald Trump, 1990

GPT-2 Agents

  • Finished finetuning the model yesterday, and tried running it with the following seeds:
    text_list = ['The game begins as ',
                 'White moves ',
                 'Black moves ',
                 'In move 1, ',
                 'In move 40, ']
  • The results are pretty incredible:

generated_chess

  • Opening moves (“The game begins as”, “In move 1”) make sense. White always moves first. Pieces move in reasonable, permissible ways (e.g., d3 to d4)
  • The model knows how to take pieces correctly. The red lines connect moves by White and the counter-moves by Black, then the taking of the piece.
  • Moves that occur later in the game (“In move 40”, “White moves”, “Black moves”) are also sensible
  • Names occur very infrequently, and Loek is probably the most frequent name, since his entire career is in the PGN files
  • I think the next steps are:
    • Create marker text for SBoW analysis, like  “In move 1, White moves pawn from d2 to d4. Black moves knight from g8 to f6.“, and “In move 40, White moves rook from d1 to d7. Black moves pawn from e5 to e4.” so that I can use the antibubble analytics
    • Create a parser that looks for the movements of particular pieces (white pawns, black knights, etc) and see if I can build a map from that using the agent tools. Pawns and kings may provide the projection, while other pieces move over that
  • Good results today!!!

GOES

  • Create Github repo for brownbag – done
  • Status report – done
  • Ping Biruh about accessing the on-site InfluxDB – done
  • Ping Boris and Bruce for mnemonics and yaw flip times
  • 2:00 meeting
  • Build a set of stress tests for influx (a rough write-timing sketch follows this list).
    • samples 300 x 7,000 floating point
    • tags: benchmark tags 2, 4, 8, 16, 32, etc
    • Vadim for Cassie DB questions
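
A rough sketch of the write-timing side of the stress tests, using the influxdb-client Python API. The URL, token, org, bucket, and data shape are placeholders, not the project values:

    import time
    from datetime import datetime, timezone
    from influxdb_client import InfluxDBClient, Point, WritePrecision
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(url="http://localhost:9999", token="my-token", org="my-org")
    write_api = client.write_api(write_options=SYNCHRONOUS)

    start = time.time()
    for i in range(1000):  # time a batch of 1,000 single-field writes
        p = (Point("benchmark")
             .tag("series", f"s{i % 8}")  # sweep the tag cardinality (2, 4, 8, ...) across runs
             .field("value", float(i))
             .time(datetime.now(timezone.utc), WritePrecision.NS))
        write_api.write(bucket="stress_test", record=p)
    print(f"{time.time() - start:.2f} sec per 1,000 writes")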

#COVID

  • Meeting cancelled

Phil 6.2.20

Military

Remember when all we had to worry about was dealing with a pandemic? Good times.

GPT-2 Agents

  • Downloaded a lot of PGN files. Looks like I could pull down the entire archive here: theweekinchess.com/twic. Need to write a script that pulls down the files and unzips them (a sketch of a downloader is below)
  • Need to scan the directory and parse each pgn file – done
  • Created train (700,000 lines) and eval (100,000 lines) files
  • Feed into GPT-2! Seems to be cranking along:

chessGPT
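
A sketch of the downloader. The URL pattern and issue numbers are guesses for illustration, not verified against the TWIC site:

    import io
    import zipfile
    import requests

    for issue in range(920, 1340):  # hypothetical range of TWIC issue numbers
        url = f"https://theweekinchess.com/zips/twic{issue}g.zip"
        resp = requests.get(url)
        if resp.status_code != 200:
            continue  # skip issues that aren't there
        with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
            zf.extractall("pgn/")  # drop the .pgn files into a local folder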

GOES

  • Submitted paper and slide deck
  • Putting together a brown-bag style presentation for the development of the GAN code
  • Ping Vadim to see what to do next?

ML seminar

  • Presented brown-bag talk
  • Need to share slides and put code on GitHub


Phil 6.1.20

century

It’s all been a bit much recently, so yesterday I took advantage of the wonderful weather and went on a long ride with a few friends.

D20 – Nagged Zach with this image. His responses generally were “It is generally pretty optimistic around here” and “According to google it is getting better. I wonder where their data comes from”.

Colorado

GPT-2 Agents

  • Still some debugging. Added output of the raw move files to find games better
  • Dates aren’t right either – fixed
  • Added some better triggering of the print_board method
  • WOW! I mean it shouldn’t be that surprising, but the PGN is wrong. Going to add a flag for games with problem moves. Then I think I should be able to generate text.

GOES

  • Put paper in the right format (word?)
  • Create the slides. Verify the speaking duration – done. It’s 20 minutes; I think probably 15 for the talk and 5 for questions
  • Found the technical paper repo. Looks like I didn’t have to worry about length! http://gvsets.ndia-mich.org/publications.php#MSTV
  • Uploaded! Just use the info in the email from GVSETS Tech Session Admin

Google is profiting from dozens of websites that peddle hoaxes and conspiracy theories about Covid-19, according to a Tech Transparency Project (TTP) investigation, revealing a major hole in the company’s claims that it’s fighting misinformation about the pandemic.

Google

Phil 5.29.20

WaPo

Race/police riots. We’re not even halfway through the year

GPT-2 Agents

  • Wrote my first PGN code! It moves the rooks out, down to the other side of the board, and then back
    1. h4 h5 2. Rh3 Rh6 3. Ra3 Ra6 4. Rh3 Rh6 5. 1/2-1/2
  • And it’s working! I did have to adjust the subtraction order to get pieces to move in the right direction. It even generates game text:
    Game.parse_moves(): Move1 = ' h4 h5 '
    Evaluating move [h4 h5]
    piece string = '' (blank is pawn)
    piece string = '' (blank is pawn)
    Game.parse_moves(): Move2 = ' Rh3 Rh6 '
    Evaluating move [Rh3 Rh6]
    piece string = 'R' (blank is pawn)
    piece string = 'R' (blank is pawn)
    Game.parse_moves(): Move3 = ' Ra3 Ra6 '
    Evaluating move [Ra3 Ra6]
    piece string = 'R' (blank is pawn)
    Chessboard.check_if_clear() white rook is blocked by white pawn
    piece string = 'R' (blank is pawn)
    Chessboard.check_if_clear() black rook is blocked by black pawn
    Game.parse_moves(): Move4 = ' Rh3 Rh6 '
    Evaluating move [Rh3 Rh6]
    piece string = 'R' (blank is pawn)
    piece string = 'R' (blank is pawn)
    Game.parse_moves(): Move5 = ' 1/2-1/2'
    Evaluating move [1/2-1/2]
    
    The game begins as white uses the Polish (Sokolsky) opening opening. 
    Aye Bee moves white pawn from h2 to h4. Black moves pawn from h7 to h5.
    White moves rook from h1 to h3. Cee Dee moves black rook from h8 to h6.
    White moves rook from h3 to a3. Black moves rook from h6 to a6.
    Aye Bee moves white rook from a3 to h3. Black moves rook from a6 to h6.
    In move 5, Aye Bee declares a draw. Cee Dee declares a draw
    
  • Ok, it’s not quite working. I can’t take pieces any more. Fixed. Here’s the longer sequence where the white rook takes the black rook and retreats back to its start, moving past the other white rook:
    1. h4 h5 2. Rh3 Rh6 3. Ra3 Ra6 4. Rxa6 c6 5. Ra3 d6 6. Rh3 e6 7. Rh1 f6 8. 1/2-1/2
  • And here’s the expanded game:
    The game begins as white uses the Polish (Sokolsky) opening opening. 
    White moves pawn from h2 to h4. Black moves pawn from h7 to h5.
    Aye Bee moves white rook from h1 to h3. Black moves rook from h8 to h6.
    In move 3, White moves rook from h3 to a3. Black moves rook from h6 to a6.
    White moves rook from a3 to a6. White takes black rook. Black moves pawn from c7 to c6.
    In move 5, White moves rook from a6 to a3. Black moves pawn from d7 to d6.
    White moves rook from a3 to h3. Black moves pawn from e7 to e6.
    In move 7, White moves rook from h3 to h1. Black moves pawn from f7 to f6.
    In move 8, Aye Bee declares a draw. Cee Dee declares a draw
  • It’s a little too Friday to try a full run and find new bugs though. Going to work on the paper instead

D20 – Ping Zach. Made contact. Asked for a date to have the maps up or cut bait.

GOES

  • Add accuracy/loss diagram and paragraph – done
  • Finish first pass
  • Nope, All hands plus 90 minutes of required cybersecurity training. To be fair, the videos were nicely done, with a light touch and good acting.

Phil 5.28.20

GPT-2 Agents

  • Back to bug hunting. Today’s job is to figure out why this:
    1. Nf3 Nf6 2. g3 c5 3. Bg2 Nc6 4. O-O e5 5. e4 Nxe4 6. Re1 Nf6 7. Nxe5 Be7 8. c4
    O-O 9. Nc3 Nxe5 10. Rxe5 d6 11. Re1 Be6 12. Bxb7 Rb8 13. Bg2 Bxc4 14. d4 Be6 15.
    b3 Rb4 16. dxc5 dxc5 17. Qxd8 Rxd8 18. Ba3 Rbb8 19. Na4 Rdc8 20. Rac1 Nd7 21.
    Bd5 Bxd5 22. Rxe7 Bc6 23. Nxc5 Nxc5 24. Rxc5 a6 25. f4 h6 26. Kf2 Bb5 27. Ke3
    Rd8 28. Rcc7 Rd3+ 29. Ke4 Rd2 30. Rxf7 Re8+ 31. Kf5 Bd3+ 32. Kg4 Rxh2 33. Rxg7+
    Kh8 34. Bd6 Rf2 35. Bc5 Rd2 36. Bb4 Rc2 37. Rxc2 Kxg7 38. Rc7+ Kg6 39. Rc6+ Kf7
    40. Rxh6 Re2 41. Rd6 Re3 42. Kh4 Be2 43. g4 Rf3 44. Rd4 Rf2 45. Kg5 1-0
  • breaks the system.
  • So I never added logic to see if the path was clear for a move. The game has a move where the white rook moves from e1 to e5 and then back. For the move back, the system looks for the closest rook, which, given how the search algorithm works, is actually the one at a1. But that path is blocked by the white bishop and white queen. The search should take the clear path and discard blocked paths. I think this fix is pretty straightforward

chess

  • Wrote the test, but I’m not sure if it’s right. We’ll test tomorrow:
        def check_if_clear(self, loc:Tuple, candidate:Tuple, piece:PIECES) -> bool:
            # pawns, knights, and kings don't need a path check - they either jump or move one square
            if piece == PIECES.WHITE_PAWN or piece == PIECES.BLACK_PAWN:
                return True
            if piece == PIECES.WHITE_KNIGHT or piece == PIECES.BLACK_KNIGHT:
                return True
            if piece == PIECES.WHITE_KING or piece == PIECES.BLACK_KING:
                return True

            # board indices for the candidate piece's square and the move's target square
            c_col_i = self.char_index.index(candidate[0])
            c_row_i = self.num_index.index(candidate[1])
            l_col_i = self.char_index.index(loc[0])
            l_row_i = self.num_index.index(loc[1])
            col_dist = l_col_i - c_col_i
            row_dist = l_row_i - c_row_i
            dist = max(abs(col_dist), abs(row_dist))

            # unit step (-1, 0, or 1 per axis) from the candidate square toward the target
            col_vec = 0
            row_vec = 0
            if col_dist != 0:
                col_vec = col_dist // abs(col_dist)
            if row_dist != 0:
                row_vec = row_dist // abs(row_dist)

            # walk the squares strictly between the two; any occupied square blocks the path
            col_i = c_col_i + col_vec
            row_i = c_row_i + row_vec
            for i in range(dist - 1):
                num = self.num_index[row_i]
                char = self.char_index[col_i]
                pos = (char, num)
                p = self.get_piece_at(pos)
                if p != PIECES.NONE:
                    return False
                col_i += col_vec
                row_i += row_vec

            return True


GOES

  • More paper writing
    • Finished the first pass of section 2, which describes the whole model.

Phil 5.27.20

Drop off the truck today!

Agents and expensive information

  • Antonio sent a note asking if I’d be interested in contributing to a chapter. Sent him this response:
    • There is something that I’d like to explore that might fit. It’s the idea that in most environments, agents (animal, human, machine, etc.) are incentivized to cheat. I think this is because information is expensive to produce, but essentially free to copy. The problem is that if all the agents cheat, then the system will collapse because the agents become decoupled from reality (what I call a stampede). So the system as a whole is incentivized to somehow restrict cheating.
    • I think this could be very interesting to work through, but I don’t have a model (or even an approach really) developed that would describe it. I think that this might be related to game theory, though I haven’t found much in the literature.

GPT-2 Agents

  • Working on building a text corpus. Going to add a search for “Opening” and “Variation”, which I’ll try before using the DB version – done
  • Having some problem that starts after a few games. Found the culprit game. Will work on it tomorrow. It might be tied to a linefeed?

GOES

  • Working on the GVSETS paper and slide deck

Phil 5.26.20

Had a good, cathartic ride yesterday:

GPT-2 Agents

  • I’ve been working on the PGNtoEnglish class and was having an odd bug where occasionally a move would pull a piece from the other side of the board. Since it was intermittent, it required many print statements and searching through the logs for “black knight”

blac knight

  • My problem was in forgetting how Python indexes into arrays. Here’s the code in question:

python

  • When I first wrote this, I had to deal with a lot of potential coordinates that were off the board, with indexes like (-2, -1) or (10, 8) for an 8×8 board. I thought to handle this with a try/except on IndexError (the bottom highlight). In other languages this would have worked, but Python allows negative indexes. Oops! Adding the test for either index being negative (the top highlight) fixed that bug. A minimal sketch of the pitfall is below.
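
A minimal sketch of the pitfall (illustration only, not the PGNtoEnglish code):

    board = [['.'] * 8 for _ in range(8)]

    def get_piece_at(row: int, col: int) -> str:
        if row < 0 or col < 0:       # the added guard - negative indexes never raise IndexError
            return 'OFF_BOARD'
        try:
            return board[row][col]   # still catches indexes past the top edge (8, 9, ...)
        except IndexError:
            return 'OFF_BOARD'

    print(get_piece_at(-2, -1))  # 'OFF_BOARD' with the guard; board[-2][-1] silently wraps without it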

D20

  • Ping Zach – done

GOES

  • Write up code review thoughts for Erik – done
  • Add n_critic to base class, along with adjustable false flag value
    • First, making sure that everything still works. Seems to.
    • Here’s the best I can do today, using the OneDGAN2a class with an RMSProp(lr=0.0005)

epochs / Noise_trained / acc_loss

  • Assemble all the bits for an example
    • Verified that the InfluxTestTrainBase still works, and it’s using the InfluxDB values
    • Created a NoiseGAN2 with the same number of points as the InfluxTestTrainBase model – done. Looks real good on the noise, too:

epochs / Noise_trained / acc_loss

  • How to trim the columns on a 2D NumPy array:
    # pull the query results, convert to an ndarray, then drop every column from index 'clamp' onward
    results = self.ifq.run_query(self.bucket, begin, end, filter_str)
    results = self.ifq.to_nd_array(results)
    results = np.delete(results, slice(clamp, None), 1)   # axis=1 trims columns, keeping 0..clamp-1
    predict_table = model.predict(results)
  • Here’s all the parts nailed together:
  • Start the paper and the deck

ML Group

  • Need to create a walkthrough of coding practices for next week. I think I’ll use the trajectory of the GAN coding as the basis


Phil 5.25.20

GPT-2 Agents

  • Work on openings
  • Maybe create database that contains games as collections of moves. A query could produce the text for the language model
  • Created a database for openings, since there are multiple versions of the same opening and I couldn’t just use the site as an index into a dict. I mean…

openings

  • Chasing down more bugs. Did you know that ‘#’ means checkmate as well as ‘++’? Now you do!

D20

  • Rework the offsets to a y-day linear model rather than an x-y day linear model

Book

  • Semester’s over, so ping Thom – done