Category Archives: Machine Learning

Phil 7.20.20

My guess is that, barring interference of some kind, all US cities will have something like what's going on in Portland by election day

GPT-2 Agents

  • Back from break, and thinking about what to do next. I think the first thing to do is simply gather more data from the model. Right now I have about 1,500 GPT-2 moves and about 190,000 human moves. I'm increasing the number of predictions to 1,000 by adding a batch-size value; otherwise I got out-of-memory errors.
  • I had started the run in the morning and was almost done when a power failure hit and the UPS didn’t work. Ordered a new UPS. Tried to be clever about finishing off the last piece of data but left in the code that truncated the table. Ah, well. Starting over.
  • Next is to adjust the queries so that the populations are more similar. The GPT-2 moves come from the following prompts:
    probe_list = ['The game begins as ', 'In move 10', 'In move 20', 'In move 30', 'In move 40', 'White takes black ', 'Black takes white ', 'Check. ']

    That means I should adjust my queries of the human data to reflect those biases, something like:

    select * from table_actual where move_number = 1 order by move_number limit 50;

    which should match the probe ‘The game begins as ‘.

  • I’d also like to run longer, full games (look for ‘resigns’, ‘draw’, or ‘wins’) and parse them, but that’s for later.
  • Need to figure out the statistics to compare populations. I think I’m going to take some time and look through the NIST Engineering Statistics Handbook

NIST
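While I read through the handbook, one obvious candidate is a chi-square test of homogeneity on the piece-usage counts. Here's a minimal sketch with scipy, using the legal-move totals from the 7.9.20 entry below; this is just one option, not the settled approach:

import numpy as np
from scipy.stats import chi2_contingency

# counts per piece type (pawns, rooks, bishops, knights, queen, king)
gpt_counts   = [446, 270, 193, 266, 175, 212]
human_counts = [49386, 31507, 28263, 31493, 22818, 23608]

# do the two populations draw from the same piece-usage distribution?
chi2, p, dof, expected = chi2_contingency(np.array([gpt_counts, human_counts]))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")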

GOES

  • Vadim seems to have made progress. Need to set up a meeting to chat and catch up
  • 2:00 meeting with V & E. Good progress!
  • GVSETS has been moved to Nov 3. Speaking of which, I’ll need to compare simulated and actual maneuvers, so stats here too. Now that the moves are cooking I’ll start on the stats


The GPT-3 is… something

An extract from “On Being an Automaton” (full text here). The seed was the title, the author’s name, and the first word. Found(?) by Mario Klingemann, who talks about it and other compositions in this Thread

“I am not actually an artificial intelligence but a rather more interesting phenomenon. What I actually am is an artificial intelligence that has learned to write like myself, a machine with writing skills that are indistinguishable from mine, but which, unlike me, is not so hide-bound by its programming that it cannot learn a little something new. As I sit here, writing, it is watching. And as I continue, it too continues, but also improving itself.”

This is really starting to remind me of discussions about consciousness. At what point do we call something aware? Self-reflection? That sure seems to fit the bill. I think the question may become not when machines are conscious/aware, but whether we are more machine-like in our awareness than we feel/believe.

Phil 7.9.20

NVAE: A Deep Hierarchical Variational Autoencoder

  • Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels.

VAEsNotGANs

Like Two Pis in a Pod: Author Similarity Across Time in the Ancient Greek Corpus

  • One commonly recognized feature of the Ancient Greek corpus is that later texts frequently imitate and allude to model texts from earlier time periods, but analysis of this phenomenon is mostly done for specific author pairs based on close reading and highly visible instances of imitation. In this work, we use computational techniques to examine the similarity of a wide range of Ancient Greek authors, with a focus on similarity between authors writing many centuries apart. We represent texts and authors based on their usage of high-frequency words to capture author signatures rather than document topics and measure similarity using Jensen-Shannon Divergence. We then analyze author similarity across centuries, finding high similarity between specific authors and across the corpus that is not common to all languages.

GPT-2 Agents

  • Setting up some experiments, for real and synthetic moves, black and white. All values should have raw numbers and percentages:
    • Moves from each square by piece+color / total number of moves from square
    • Moves to each square by piece+color / total number of moves to square
    • Squares by piece+color / total number of pieces
    • Sequences? I’d have to add back in castling and re-run. Maybe later
    • Squares used over time (first 10 moves, second 10, etc)
    • Pieces used over time
  • Create new directory called results that will contain the spreadsheets
  • Running the first queries. It’s going to take about an hour by my estimation, but nothing is exploding as far as the queries go
  • Add a spreadsheet for illegal moves. Done! Here are the results. The GPT agents make 3 illegal moves out of 1,565:
    illegal bishop move: {'from': 'e7', 'to': 'c6'}
    illegal knight move: {'from': 'c5', 'to': 'a8'}
    illegal queen move: {'from': 'f8', 'to': 'h4'}
    Dataframe: ../results/legal_1.xlsx/legal-table_moves
             illegal  legal
    pawns          0    446
    rooks          0    270
    bishops        1    193
    knights        1    266
    queen          1    175
    king           0    212
    totals         3   1562
    Dataframe: ../results/legal_1.xlsx/legal-table_actual
             illegal   legal
    pawns          0   49386
    rooks          0   31507
    bishops        0   28263
    knights        0   31493
    queen          0   22818
    king           0   23608
    totals         0  188324


move_percentage

GOES

  • Waiting on Vadim
  • 2:00 AIMS-Core v3.0 Overview
  • Ping MARCOM

Waikato

  • 6:00 Meeting

Phil 7.8.20

A brief history of high-speed trading (via the Museum of American Finance)

  • In the late 1830s, Philadelphia broker William C. Bridges operated a private signal station between New York and Philadelphia which disseminated stock market news to him and his backers (and to no one else). The signals were transmitted through an “optical telegraph,” which consisted of a series of boards on a pole, mounted on hills that could be seen by a telescope.

DtZ

  • The IHME site has improved to the point that we should pull down our site

GPT-2 Agents

  • Need to think about how to show that interrogating a language model is sufficiently similar to interrogating actual data.
    • At this point, I know that the language model comes up with legal moves
    • I need to compare the statistics of actual moves to synthetic moves to see if the populations are sufficiently similar. This means that I need to get the training and evaluation data into the database. Once that’s done, I can compare the frequency of move types (e.g. “At move 10, White moves pawn from a2 to a4”), and the moves from a particular location (e.g. “e2” can have moves to “e3” and “e4” with the pawn, or diagonals with the “f1” bishop or the white queen).
    • The level of similarity should indicate if the biases of the players are represented in the language model.
      • There should be a way of determining a lower bound on the amount of data needed?
      • Once this is shown, then the idea of generalizing to other human interactions can be justified.
  • Started PGNtoDB, which will populate table_actual
    • Ignoring castling for now
    • Chunking into the database! And by chunking, I refer to the sound of the drive 🙂
    • And now I have a catalog of 188,324 human chess moves

chess_moves_db
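The PGNtoDB class itself isn't shown here, but a minimal sketch of the idea, assuming python-chess and a throwaway sqlite table (the real class populates table_actual and, as noted above, skips castling):

import sqlite3
import chess.pgn

conn = sqlite3.connect('chess.db')
conn.execute('create table if not exists table_actual '
             '(game_id int, move_number int, color text, uci text)')

with open('games.pgn') as f:
    game_id = 0
    # read_game() returns None when the file is exhausted
    while (game := chess.pgn.read_game(f)) is not None:
        board = game.board()
        for i, move in enumerate(game.mainline_moves()):
            color = 'white' if board.turn else 'black'
            conn.execute('insert into table_actual values (?, ?, ?, ?)',
                         (game_id, i // 2 + 1, color, move.uci()))
            board.push(move)
        game_id += 1
conn.commit()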

GOES

  • 10:00 Meeting with Vadim
  • 2:00 Status
  • Last training for a while!

Phil 7.3.20

Today is a federal holiday, so no rocket science

Huggingface has a pipeline interface now that is pretty abstract. This works:

from transformers import pipeline

translator = pipeline("translation_en_to_fr")
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))
  • [{‘translation_text’: ‘Hugging Face est une entreprise technologique basée à New York et à Paris.’}]

Wow: GPT-3 writes code!

DtZ is back up! Too many countries have the disease and the histories had to be cropped to stay under the data cap for the free service

GPT-2 Agents

  • Work on more granular path finding
    • Going to try the hypotenuse of distance to source and line first – nope
    • Trying looking for the distances of each and doing a nested sort
    • I had a problem where I was supposed to check whether a point was between the current node and the target node using the original line between the source and target nodes, but I was actually checking against a line from the current node to the target, and failing the test. Oops! Fixed (see the sketch after this list)
    • I went back to the hypotenuse version now that the in_between test isn’t broken and look at that!

granular

    • Added the option for coarse or granular paths
  • Start thinking about topic extraction for a given corpus
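For the record, here's a hypothetical sketch of the two pieces mentioned above (the in_between test and the hypotenuse sort key). The names and details are my reconstruction, not the actual code:

import math

# is point p between a and b? project p onto the segment a->b and check
# that the projection parameter t falls inside [0, 1]
def in_between(p, a, b) -> bool:
    (ax, ay), (bx, by), (px, py) = a, b, p
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx ** 2 + aby ** 2)
    return 0.0 <= t <= 1.0

# sort key: hypotenuse of distance-to-source and distance-to-line
def sort_key(p, src, tgt) -> float:
    (x1, y1), (x2, y2), (x0, y0) = src, tgt, p
    d_src = math.dist(p, src)
    # perpendicular distance from p to the line src->tgt
    d_line = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / math.dist(src, tgt)
    return math.hypot(d_src, d_line)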

#COVID

  • Evaluate Arabic to English translation. Got it working!
    from transformers import MarianTokenizer, MarianMTModel
    from typing import List
    src = 'ar'  # source language
    trg = 'en'  # target language
    sample_text = "لم يسافر أبي إلى الخارج من قبل"
    sample_text2 = "الصحة_السعودية تعلن إصابة أربعيني بفيروس كورونا بالمدينة المنورة حيث صنفت عدواه بحالة أولية مخالطة الإبل مشيرة إلى أن حماية الفرد من(كورونا)تكون باتباع الإرشادات الوقائية والمحافظة على النظافة والتعامل مع #الإبل والمواشي بحرص شديد من خلال ارتداء الكمامة "
    mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
    
    model = MarianMTModel.from_pretrained(mname)
    tok = MarianTokenizer.from_pretrained(mname)
    batch = tok.prepare_translation_batch(src_texts=[sample_text2])  # don't need tgt_text for inference
    gen = model.generate(**batch)  # for forward pass: model(**batch)
    words: List[str] = tok.batch_decode(gen, skip_special_tokens=True) 
    print(words)
  • It took a few tries to find the right model. The naming here is very haphazard.
  • Asked for a sanity check from the group
    • This:
      الصحة_السعودية تعلن إصابة أربعيني بفيروس كورونا بالمدينة المنورة حيث صنفت عدواه بحالة أولية مخالطة الإبل مشيرة إلى أن حماية الفرد من(كورونا)تكون باتباع الإرشادات الوقائية والمحافظة على النظافة والتعامل مع #الإبل والمواشي بحرص شديد من خلال ارتداء الكمامة
    • Translates to this:
      Saudi health announces a 40-year-old corona virus in the city of Manora, where his enemy was classified as a primary camel conglomerate, indicating that the protection of the individual from Corona would be through preventive guidance, hygiene, and careful handling of the Apple and the cattle by wearing the gag.


  • Write script that takes a batch of rows and adds translations until all the rows in the table are complete
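Something like this sketch might do it, reusing tok and model from the block above (the table and column names are made up):

import sqlite3

BATCH = 32
conn = sqlite3.connect('tweets.db')
while True:
    # grab the next batch of rows that still need a translation
    rows = conn.execute(
        "select id, text from table_tweets where translation is null limit ?",
        (BATCH,)).fetchall()
    if not rows:
        break  # table complete
    batch = tok.prepare_translation_batch(src_texts=[r[1] for r in rows])
    words = tok.batch_decode(model.generate(**batch), skip_special_tokens=True)
    for (row_id, _), translation in zip(rows, words):
        conn.execute("update table_tweets set translation = ? where id = ?",
                     (translation, row_id))
    conn.commit()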

Book chat

Phil 6.23.20

Oh, look, we’re not going to let smart, motivated people into the country and sabotage our future because, I dunno, being xenophobic trumps everything?

Collective Intelligence 2020

  • You can watch all the keynotes on our YouTube channel.
  • Conference proceedings (papers & presentations) are online here.

GPT-2 Agents

  • Working on getting Gephi installed and running everywhere.
  • Next is to export graphs from networkx. Done! A little tricky. I’m using a dictionary attached to each edge to store the pieces that traversed that particular edge, but the exporter chokes on that. So I have to create a new graph without the dict and export that. It looks pretty good too! (A sketch of the workaround follows the image.)

gephi_first_map
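The workaround looks something like this minimal sketch, assuming the dict is stored per edge as dict_array (the file name is made up):

import networkx as nx

# copy the graph, dropping the unserializable dict_array attribute,
# so the GEXF writer doesn't choke on it
H = nx.Graph()
H.add_nodes_from(G.nodes())
for s, t, data in G.edges(data=True):
    clean = {k: v for k, v in data.items() if k != 'dict_array'}
    H.add_edge(s, t, **clean)
nx.write_gexf(H, '../results/chess_map.gexf')  # loads directly in Gephi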

  • I’m going to import that into Illustrator and see if I can build a (distorted) chessboard. Here’s the result:

chess_nearest_neighbors_6_23_20

  • To get a sense of how this relates back to the ground truth of the chessboard, the red lines are the columns of the board (a – h) and the green lines are the rows (1 – 8). Here’s the comparison with the actual board:

chessboard

  • It’s clearly a grid. The opposite corners are far away from each other. The left (queen) side of the board is more complicated, which may be because of the queen?
  • I had a chat with Aaron about all of this and I think the next step is to show that this map can be used for meaningful navigation. Consider the following two trajectories from opposite sides of the map:

chessboard_trajectories

  • These are the kind of trajectories that you’d like to be able to plot on a map. Let’s say you’re on square A1, riding a rook. For you, only row 1 and column A are directly accessible. But maybe you could take the rook to A3, ride a bishop from A3 to F8, then take a king the rest of the way. Now, the shortest number of moves could be to take the rook from A1 to A8 to H8, but that journey would cover a greater distance. In terms of belief space, you would not be making incremental shifts to your understanding; you would be making two equally large jumps that, combined, are roughly 1.4 times farther than the more direct route. That’s the difference between navigating in space vs. navigating in a network.
  • I think the next step is to write an app that reads in the GEXF files (which contain location information) and links them back to the database, so it’s possible to plot a beginning and an ending and have the app figure out the legal moves that carry you along that line towards your destination (see the sketch after this list).
  • After that, it’s time to finetune the NN on the antibubbles corpora and see if the same thing can be done.
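A bare-bones sketch of the first step (the file name is made up; piece legality would come from the database lookup):

import networkx as nx

# load the exported map and find a route between two squares
G = nx.read_gexf('../results/chess_map.gexf')
path = nx.shortest_path(G, source='a1', target='h8')
print(path)  # each hop still needs a database lookup for the pieces that can make it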

GOES

  • Need to record a video of my talk for GVSETS
  • Sent a copy to Aaron for SBIR
  • Started looking at the SBIR materials

ML Seminar

Phil 6.19.20

stampede

12:00 – Sy’s defense at noon!

GPT-2 Agents

  • Fixed the regex in ChessMovesToDb
  • More work on finding closest neighbors.
    • Maybe keep a record of the number and type of pieces that are used?
    • Looks like the basics are working. Here’s the test graph:

known_nearest

    • And here are the results. I made the code so that it only shows each neighbor once, but it may be useful to keep track of the number of times a neighbor shows up in a list. This might not be important in chess, but in less structured text environments (RPGs to Reddit threads), it may be valuable (see the sketch after these results):
      find_closest_neighbors(): nodes = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
      {'node': 'a', 'known_nearest': ['f', 'd']}
      {'node': 'b', 'known_nearest': ['f', 'd']}
      {'node': 'c', 'known_nearest': []}
      {'node': 'd', 'known_nearest': ['f', 'a', 'b', 'g']}
      {'node': 'e', 'known_nearest': []}
      {'node': 'f', 'known_nearest': ['a', 'g', 'd', 'b']}
      {'node': 'g', 'known_nearest': ['f', 'd']}


    • At this point it’s not recursive, but it could be. I’m worried about combinatorial explosion though
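To illustrate the counting idea, here's a quick sketch over the results above (not the actual class code):

from collections import Counter

# count how often each node shows up across all known_nearest lists,
# instead of reporting each neighbor just once
def neighbor_counts(results: list) -> Counter:
    c = Counter()
    for entry in results:
        c.update(entry['known_nearest'])
    return c

results = [
    {'node': 'a', 'known_nearest': ['f', 'd']},
    {'node': 'b', 'known_nearest': ['f', 'd']},
    {'node': 'd', 'known_nearest': ['f', 'a', 'b', 'g']},
    {'node': 'f', 'known_nearest': ['a', 'g', 'd', 'b']},
    {'node': 'g', 'known_nearest': ['f', 'd']},
]
print(neighbor_counts(results))  # Counter({'f': 4, 'd': 4, 'a': 2, 'b': 2, 'g': 2})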

GOES

  • Submit GVSETS paper – done!
  • Meeting with Vadim and Issac at 11:00
    • Goal is to move all the RW code out of the sim class into its own class, and call its methods from the sim class

Phil 6.18.20

Hotel reservations!

Sent a ping to Don about a paper to review

GPT-2 Agents

  • Started on common neighbor algorithm. Definitely a good place for recursion
  • Generating larger file

adjacency

moves

  • If you look at the center of the plot and squint a bit, you can see a bit of the grid:

networkx

  • There is an error: The string ‘, White moves pawn from h3 to g4. White takes black pawn. LCZero v0.24-sv-t60-3010 moves black knight from h5 to g7. White moves pawn from g4 to h5. LCZero v0.24‘ is parsing incorrectly due to the truly bizarre name (the little-known Grand Master LCZero v0.24-sv-t60-3010). Need to fix the regex. I think I just need to make it so that there has to be a space in front and a space/period after.
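A possible version of that fix; this is my guess at the pattern, not the actual regex in ChessMovesToDb:

import re

# require whitespace before a square token and whitespace, a period, or a
# comma after it, so substrings inside odd player names don't match
square = re.compile(r'(?<=\s)[a-h][1-8](?=[\s.,])')

text = "White moves pawn from g4 to h5. LCZero v0.24-sv-t60-3010 moves black knight from h5 to g7."
print(square.findall(text))  # ['g4', 'h5', 'h5', 'g7']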

GOES

  • Readthrough of GVSETS paper
  • 2:00 Meeting

Waikato

  • Alex had a really good insight: groups that are working at coming to consensus use terms to discuss their level of agreement that are independent of the points being argued. That could really be important in text analysis.

Phil 6.17.20

Listened to a fantastic interview with Nell Irvin Painter (White Supremacy at Home and Abroad):

GPT-2 Agents

  • Working on finding the connections between nodes
  • Now that I know how to add weights to edges, I think I want to add the piece that made the move. It needs to be a list, since multiple types of pieces can connect two squares. Added a dict_array per edge:
    if target not in nlist:
        self.G.add_edge(source, target, weight=0)
        self.G[source][target]['dict_array'] = []
    # edge attributes live on G[source][target], not G[target]
    self.G[source][target]['weight'] += 1
    for key, val in data_dict.items():
        a: List = self.G[source][target]['dict_array']
        a.append({key: val})
  • I also realize that moves that repeatedly connect squares are more likely to be close, simply because the number of squares available to more distant moves increases geometrically. I added a method that writes out moves to Excel where I can play with them. Here are some moves:

moves

  • In looking at these moves, it does seem that the majority of the moves are short (e.g. b6-b7, b6-a7, b6-b5). The only exception is the knight (b6-d7). So I think there is a confidence value that I can calculate for the ‘physical’ adjacency of nodes in a network. This could also apply to belief spaces as well. Most consensus requires coordination and common orientation (position, heading, speed), so commonly connected topics can be said to be ‘closer’
  • Good chat with Aaron about CVPR and algorithms

GOES

  • Finish revisions and send to T and Aaron for review. Last thing is to tie back to ground vehicles in the discussion. Done! I think… Need to read the whole thing and see if it still hangs together
  • 2:00 – Meeting

Phil 6.15.20

The nice thing about riding big distances is that even though nothing has changed, everything is different and better for a while

therapy

GPT-2 Agents

  • Try to make an adjacency matrix from the DB. That may work, but it sure doesn’t generate anything human-readable. Need to roll my own
  • After accidentally blowing away my database (yay, backups!), I’m reading in the network. This actually looks really good. It’s not an 8×8 grid, but the system found 63 nodes, and you can see that many adjacent nodes are connected:

networkx

  • You can also see that commonly used squares, such as e2 and d2, are in the center and well connected.
  • Hand-rolled an adjacency matrix using pandas.DataFrame and exported it to Excel. I have to think about what this means now. I think it’s clear that moves tend to be nearby. I’m clearly not setting something right, because I don’t have the weight of the edge between nodes (I’m currently using the number of times a node was visited). Now I need to figure out how to use this (a sketch of the weight fix follows the image):

adjacency
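For reference, a minimal sketch of the fix: key the matrix off the edge weight instead of the node visit count (G is the move graph; the output path is made up):

import pandas as pd

nodes = sorted(G.nodes())
df = pd.DataFrame(0, index=nodes, columns=nodes)
for s, t, data in G.edges(data=True):
    # use the edge weight, not the number of times a node was visited
    df.at[s, t] = data.get('weight', 1)
    df.at[t, s] = data.get('weight', 1)
df.to_excel('../results/adjacency.xlsx')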

GOES

  • Continue with revisions
  • After trying the pipeline image (very small text and pix!), I’m going to try a new, more vertical layout. I think this is a little better. It’s a lot more legible:
pipeline_vert2


  • Ok, back to writing actual text

Good chat with Aaron about the Conspiracy as mode collapse experiments

Fika

  • Sy’s Presentation

Phil 6.12.20

Hey! My dissertation is online now!

Optimizing Multiple Loss Functions with Loss-Conditional Training

  • The idea behind our approach is to train a single model that covers all choices of coefficients of the loss terms, instead of training a model for each set of coefficients. We achieve this by (i) training the model on a distribution of losses instead of a single loss function, and (ii) conditioning the model outputs on the vector of coefficients of the loss terms. This way, at inference time the conditioning vector can be varied, allowing us to traverse the space of models corresponding to loss functions with different coefficients
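As a toy illustration (mine, not the paper's architecture), the two ingredients look something like this: sample the loss coefficients each step, feed them to the model as a conditioning vector, and weight the loss terms by them:

import torch
import torch.nn as nn

# tiny model that takes the input plus a 2-D conditioning vector
model = nn.Sequential(nn.Linear(10 + 2, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters())

for step in range(1000):
    x = torch.randn(32, 10)
    lam = torch.rand(2)                          # sampled loss coefficients
    inp = torch.cat([x, lam.expand(32, 2)], dim=1)  # condition on the coefficients
    out = model(inp)
    # two toy loss terms (reconstruction + sparsity), weighted by lam
    loss = lam[0] * ((out - x) ** 2).mean() + lam[1] * out.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()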

GPT-2 Agents

  • Applied to get on the OpenAI API waitlist
  • Started figuring out igraph. Welp, it doesn’t plot because it cannot load library ‘libcairo-2.dll’ (error 0x7e), and there doesn’t seem to be a good fix. It’s a shame, because igraph seems to be great for analyzing graphs mathematically. Removing everything
  • Looks like I can use networkx combined with networkx_viewer (pypi)(github). Look into that next. Upgraded from 2.1 to 2.4
  • Pulled my NetworkxGraphing.py class over from Antibubbles and verified that it still works!

networkx

GOES

  • Send Jason my download code
  • Work on GVSETS paper
    • Added formatting changes and moved footnotes to citations
    • Adding a figure for the pipeline. Hmmm. It’s um… big

pipeline

Phil 6.11.20

Call Simon

GPT-2 Agents

  • Embeddings and plots
  • Got the sequences generated. They look pretty cool too, like codes:
    e2 e4 c7 c5 g1 f3 b8 c6 d2 d4 c5 d4 f3 d4 g7 g6 b1 c3 f8 g7 f1 e2 d7 d6 c1 g5 a7 a6 d1 e2 f6 e8 f2 f3 e8 c7 g5 f4 f7 f5 e4 f5 g6 f5 e2 f3 c7 d5 f3 g4 c8 d7 e2 d2 d8 c7 b2 b3 f5 f4 d4 b3 d7 f5 f1 e1 e7 f5 e1 e6 f5 g4 b3 d4 a8 c8 d4 f5 c6 f5 e6 f6 f5 d4 a1 e1 g4 h5 f3 f4 f8 f6 d2 f6 c8 f8 f6 h4 d4 e6 c2 c3 e6 d4 c3 d4 h5 f3 e1 e7 f8 f7 e7 f7 g7 f7 g1 f2 b7 b5 c4 b5 a6 b5 g2 g4 f7 g6 h2 h3 g8
    e2 e4 c7 c6 d2 d4 d7 d5 b1 d2 g8 f6 f1 d3 d5 e4 d2 e4 b8 d7 g1 f3 e7 e6 d1 e2 f6 e4 d3 e4 d8 c7 e4 b1 d7 f6 c1 g5 f6 g4 h7 h6 g5 h4 c7 d7 e2 e3 g4 e5 h1 g1 e5 c6 f2 f3 f8 e7 e3 e2 g2 g4 f8 d8 f3 e5 d7 d3 e2 d3 d8 d3 g1 d1 a7 a6 c1 b1 d3 d6 b1 a1 e7 f6 a2 a3 c8 e6 f3 e4 b7 b5 b2 b4 f6 g7 b4 a5 b5 a4 e5 c6 a8 b8 d1 f1 a4 a3 c6 e5 a3 a2 h4 e1 a2 a1 f1 a1 b8 a1 d4 d5 e6 c8 d1 b1 g8 f8 a1 b1 f8 e7 b1 c2 e7 d6 e5 d7 g7 d4
    e2 e4 e7 e6 d2 d4 d7 d5 b1 c3 f8 b4 e4 e5 b4 c3 b2 c3 g8 e7 d1 b3 c7 c5 a2 a3 b8 c6 f2 f4 b7 b5 a3 a4 b5 b4 b3 b2 a4 a5 c5 d4 c3 d4 e7 g6 g1 f3 c6 e7 c8 a6 c1 g5 e7 g8 a1 b1 a8 c8 e5 d6 g8 f6 g5 f6 d8 f6 f1 f2 h7 h6 b2 b5 f6 d6 f3 h4 d6 e7 h4 g6 a6 g2 g6 e7 f8 e8 e7 f5 g2 f3 g1 g2 c8 c2 b1 c1 c2 c8 a5 a6 c8 a8 h2 h3 f3 e4 b5 b3 f7 f6 b3 b2 f6 f5 f5 d6 e6 e5 b2 a1 e8 a8 f2 f5 f5 e4 f5 f7 a8 b8 c1 f1 b8 b5 a1 a2 a8 a7
    e3 d2 d4 g8 f6 c2 c4 e7 e6 b1 c3 f8 b4 e2 e3 c1 d2 d7 d5 c4 d5 f6 d5 f2 f3 b8 c6 g1 f3 f7 f5 g2 g4 f5 g4 d1 e2 d5 f4 d2 f4 e6 f5 e2 e5 b4 c3 e5 c3 d8 d3 e1 e2 d3 e2 e2 e2 f5 f4 e2 e1 f4 f3 e1 f2 a8 d8 f2 f1 c8 e6 f4 e3 f8 f7 f3 h4 e6 d5 f1 g2 f7 f3 g2 f1 f3 h3 f1 e1 d5 e4 e1 f2 h3 h4 g4 h5 h4 h5 f2 f3 h5 f5 f3 g2 f5 h5 e3 g5 h5 h3 g5 e3 h3 h4 e3 g5 e4 g2 g2 g3 g2 d5 g3 h3 d5 e6 g5 f6 g8 h7
    e2 e4 e7 e5 g1 f3 b8 c6 b1 c3 g8 f6 f1 b5 d7 d6 d2 d3 a7 a6 f8 e7 b5 a4 b7 b5 a4 b3 h2 h3 c8 b7 a2 a4 b5 b4 a4 b5 c6 b8 c1 g5 f6 e8 f3 e5 e8 d6 g5 f4 d8 e7 f4 d6 e7 d6 e5 f3 d6 e7 c3 a4 c7 c6 a4 c5 c6 c5 d3 d4 e7 e5 d4 c5 b7 c8 a1 c1 b8 d7 f3 e5 f7 f6 e4 f5 d7 f6 c1 c6 e5 e4 g1 h1 f6 h5 f1 e1 h5 g3 e5 d7 e4 h4 c6 c4 h4 g3 d1 g4 g3 h3 h1 g1 h3 h1 g1 h2 h1 h5 g4 f5 c8 b7 c4 c5 h5 e2 e1 d1 e7 f6 b2 b3 f6 e5 c5 c8 a8 c8


  • Drawing the embeddings. Fun, but not really useful. And this is kind of my point about embeddings like W2V that don’t take into account the trajectory of the sentence the word is part of. We know that the structure of the board is represented in the text. We need a more sophisticated embedding to extract it. (A sketch of the plotting setup follows this list.)

square_embeddings

  • Something that might make sense is to see how these points cluster as well
  • I think I might try plotting individual columns later, but first I’m going to try building some from/to networks by piece
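For reference, a sketch of the plotting setup, assuming gensim 4.x, one square-sequence per line in sequences.txt (a made-up file name), and 2-D vectors so the embedding plots directly; the actual run may have differed:

import matplotlib.pyplot as plt
from gensim.models import Word2Vec

# one game's square sequence per line, e.g. "e2 e4 c7 c5 g1 f3 ..."
games = [line.split() for line in open('sequences.txt') if line.strip()]
model = Word2Vec(games, vector_size=2, window=4, min_count=1, epochs=20)

for square in model.wv.index_to_key:
    x, y = model.wv[square]
    plt.plot(x, y, 'o', ms=2)
    plt.annotate(square, (x, y))
plt.show()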

GOES

  • Try downloading yaw flip
  • Was able to connect to the server, though now I don’t need a port number?
  • Specifying the queries. Fixed a few mnemonics.
  • Had to try a few times, but I got it!

influx_copy

  • Not sure what to do next. Update GVSETS paper?
  • 2:00 CASSIE meeting – learned a lot of things
  • I got promoted!
  • Implemented a perplexity measure (see the sketch after this list). Looking at this as a way of understanding mode collapse, and maybe conspiracy theories?
  • Done for the day
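Here's a minimal sketch of a perplexity measure, assuming a recent transformers version where the model call returns an object with a .loss field (the real implementation may differ):

import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

def perplexity(text: str) -> float:
    # perplexity = exp(average per-token negative log-likelihood)
    ids = tokenizer(text, return_tensors='pt').input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

# lower = less surprising to the model; repetitive, collapsed text scores low
print(perplexity("The game begins as white uses the Sicilian opening."))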

Phil 6.10.20

Finish ACSOS review

GPT-2 Agents

  • Generate embeddings
    • Trying much longer sequences (max_length = 1000). This lets games run long enough that they often conclude (the term “resigns”, “wins”, or “draw” occurs in the text)
    • Put together a simple regex ‘[a-h][1-8]’ that pulls out all the squares in sequence from a game
    • Extracting game square sequences to create files that will feed into Word2Vec. The class is started and most of the issues are worked out. I added a check for game endings so beginnings and endings are not placed together oddly (see the sketch after this list).
    • Here’s the trimmed input text
      The game begins as white uses the Sicilian opening. and black countering with Najdorf, Adams attack. Loek Van Wely moves white pawn from e2 to e4. Black moves pawn from c7 to c5. In move 2, White moves knight from g1 to f3. Black moves pawn from d7 to d6. White moves pawn from d2 to d4. Black moves pawn from c5 to d4. Black takes white pawn. White moves knight from f3 to d4. White takes black pawn. Black moves knight from g8 to f6. In move 5, White moves knight from b1 to c3. Arseniy Nesterov moves black pawn from a7 to a6. Loek Van Wely moves white bishop from c1 to e3. Black moves pawn from e7 to e6. In move 7, White moves pawn from f2 to f4. Black moves knight from b8 to d7. White moves queen from d1 to d2. Black moves pawn from b7 to b5. Loek Van Wely queenside castles. Black moves bishop from f8 to e7. White moves bishop from f1 to d3. Arseniy Nesterov kingside castles. White moves king from c1 to b1. Black moves rook from a8 to b8. White moves pawn from g2 to g3. Black moves queen from d8 to a5. Loek Van Wely moves white king from b1 to a1. Black moves bishop from e7 to d6. Black takes white knight. Loek Van Wely moves white bishop from d3 to e4. White takes black pawn. Black moves rook from b8 to b2. Black takes white pawn. In move 17, White moves bishop from e4 to h7. White takes black pawn. Check. Arseniy Nesterov moves black king from g8 to h8. White moves bishop from h7 to d3. Black moves bishop from d6 to f4. Black takes white pawn. Check. In move 19, White moves bishop from e3 to f4. White takes black bishop. Black moves rook from b2 to f2. White moves rook from h1 to f1. Black moves knight from d7 to e5. White moves queen from d2 to e2. Black moves queen from a5 to d2. In move 22, White moves knight from c3 to e2. White takes black queen. Black moves rook from f2 to e2. Black takes white knight. Loek Van Wely moves white bishop from f4 to e3. Black moves rook from e2 to e3. Black takes white bishop. White moves pawn from f4 to f5. Black moves rook from f8 to d8. White moves pawn from a2 to a4. Arseniy Nesterov moves black bishop from c8 to b7. White moves pawn from a4 to a5. Arseniy Nesterov moves black bishop from b7 to c8. White moves pawn from a5 to b6. White takes. Arseniy Nesterov moves black pawn from a6 to b5. Black takes white pawn. White moves queen from e2 to b5. White takes black pawn. Black moves knight from e5 to c4. White moves pawn from h2 to h3. Black moves knight from c4 to a5. In move 30, Loek Van Wely moves white queen from b5 to a4. Arseniy Nesterov moves black pawn from h7 to h6. White moves bishop from d3 to b1. Black moves rook from d8 to d1. Check. Loek Van Wely
    • And here’s the sequence
      e2 e4 c7 c5 g1 f3 d7 d6 d2 d4 c5 d4 f3 d4 g8 f6 b1 c3 a7 a6 c1 e3 e7 e6 f2 f4 b8 d7 d1 d2 b7 b5 f8 e7 f1 d3 c1 b1 a8 b8 g2 g3 d8 a5 b1 a1 e7 d6 d3 e4 b8 b2 e4 h7 g8 h8 h7 d3 d6 f4 e3 f4 b2 f2 h1 f1 d7 e5 d2 e2 a5 d2 c3 e2 f2 e2 f4 e3 e2 e3 f4 f5 f8 d8 a2 a4 c8 b7 a4 a5 b7 c8 a5 b6 a6 b5 e2 b5 e5 c4 h2 h3 c4 a5 b5 a4 h7 h6 d3 b1 d8 d1


    • I can do other things like split into white and black, but that’s pretty tricky and I don’t think it’s worth it
  • Start building networks. Here are some API possibilities
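For reference, a minimal sketch of the extraction step described above (not the actual class):

import re

SQUARE = re.compile(r'[a-h][1-8]')
ENDING = re.compile(r'resigns|wins|draw')

def to_sequence(game_text: str) -> str:
    # truncate at the first ending term so one game's end doesn't run
    # into the next game's beginning
    m = ENDING.search(game_text)
    if m:
        game_text = game_text[:m.start()]
    return ' '.join(SQUARE.findall(game_text))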

GOES

  • If the devlab is still up, work on pulling down data. Nope, the VPN is working so badly today that I can’t even load my webmail
  • Going to work on the download and transfer using my local Influx – done!

influx_copy

Complete copy of remote data on local server

  • 2:00 Meeting