
Phil 7.23.20

Amid a tense meeting with protesters, Portland Mayor Ted Wheeler tear-gassed by federal agents

GPT-2 Agents

  • Good back-and-forth with Antonio about venues
  • It struck me that statistical tests about fair dice might give me a way of comparing the two populations. Pieces are roughly equivalent to dice sides. Looking at this post on the RPG Stackexchange. That led me to Pearson’s Chi-square test (which rang a bell as the sort of test I might need).
  • Success! Here’s the code:
    # compare piece-move counts between the GPT-2 model and the human (TWIC) games
    from scipy.stats import chisquare, chi2_contingency, pearsonr
    import pandas as pd
    import numpy as np
    
    # counts per piece type: pawns, rooks, bishops, knights, queen, king
    gpt = [51394,
           25962,
           19242,
           23334,
           15928,
           19953]
    
    twic = [49386,
            31507,
            28263,
            31493,
            22818,
            23608]
    
    # goodness-of-fit of the GPT counts against the TWIC counts (scipy's chisquare
    # returns the chi-square statistic, despite the 'z' name used here)
    z, p = chisquare(f_obs=gpt, f_exp=twic)
    print("z = {}, p = {}".format(z, p))
    
    ar = np.array([gpt, twic])
    print("\n",ar)
    
    df = pd.DataFrame(ar, columns=['pawns', 'rooks', 'bishops', 'knights', 'queen', 'king'], index=['gpt-2', 'twic'])
    print("\n", df)
    
    # chi-square test of independence on the 2x6 contingency table
    z,p,dof,expected=chi2_contingency(df, correction=False)
    print("\nNo correction: z = {}, p = {}, DOF = {}, expected = {}".format(z, p, dof, expected))
    
    # with Yates' correction (scipy only applies it when DOF == 1, so the result is unchanged here)
    z,p,dof,expected=chi2_contingency(df, correction=True)
    print("\nCorrected: z = {}, p = {}, DOF = {}, expected = {}".format(z, p, dof, expected))
    
    # correlation between the two piece-count vectors
    cor = pearsonr(gpt, twic)
    print("\nCorrelation = {}".format(cor))
    
    
  • Here are the results:
    "C:\Program Files\Python\python.exe" C:/Development/Sandboxes/GPT-2_agents/gpt2agents/analytics/pearsons.py
    z = 8696.966788178523, p = 0.0
    
     [[51394 25962 19242 23334 15928 19953]
     [49386 31507 28263 31493 22818 23608]]
    
            pawns  rooks  bishops  knights  queen   king
    gpt-2  51394  25962    19242    23334  15928  19953
    twic   49386  31507    28263    31493  22818  23608
    
    No correction: z = 2202.2014776980245, p = 0.0, DOF = 5, expected = [[45795.81128532 26114.70012657 21586.92215826 24914.13916789 17606.71268169 19794.71458027]
     [54984.18871468 31354.29987343 25918.07784174 29912.86083211 21139.28731831 23766.28541973]]
    
    Corrected: z = 2202.2014776980245, p = 0.0, DOF = 5, expected = [[45795.81128532 26114.70012657 21586.92215826 24914.13916789 17606.71268169 19794.71458027]
     [54984.18871468 31354.29987343 25918.07784174 29912.86083211 21139.28731831 23766.28541973]]
    
    Correlation = (0.9779452546334226, 0.0007242538456558558)
    
    Process finished with exit code 0
    


  • It might be time to start writing this up!

GOES

  • Found vehicle orientation mnemonics: GNC_AD_STA_FUSED_QRS#

2020-07-23

  • 11:00 Meeting with Erik and Vadim about schedules. Erik will send an update. The meeting went well. Vadim’s going to exercise the model through a set of GOTO ANGLE 90 / GOTO ANGLE 0 commands for each of the reaction wheels, and we’ll see how they map to the primary axis of the GOES.

Phil 7.21.20

Superstrata ebike

Review papers – finished reading the first; writing the review today. First review done!

Realized that I really need to update my online resumes to include Python and machine learning. I can probably just replace the Flex and YUI entries with Python and TensorFlow.

Read this today: Proposal: A Market for Truth to Address False Ads on Social Media. It’s by Marshall Van Alstyne, a Questrom Chair Professor at Boston University, where he teaches information economics. From the Wikipedia entry:

  • Information has special characteristics: It is easy to create but hard to trust. It is easy to spread but hard to control. It influences many decisions. These special characteristics (as compared with other types of goods) complicate many standard economic theories. 
  • Information economics is formally related to game theory as two different types of games that may apply, including games with perfect information, complete information, and incomplete information. Experimental and game-theory methods have been developed to model and test theories of information economics.
  • This is as close to a description of decisions in the presence of expensive information as I’ve seen so far

GPT-2 Agents

  • The run completed last night! I have 156,313 synthetic moves
  • Reworking the queries from the actual moves to reflect the probes for the synthetic
  • Created a view that combines the probe and the response into a description:
    create or replace view gpt_view as
        select tm.move_number, tm.color, tm.piece, tm.`from`, tm.`to`, concat(tm.probe, tm.response) as description
        FROM table_moves as tm;
  • Almost forgot to back up the db before doing something dumb
  • Created a “constraint string” that should make the searched game space somewhat more similar (a sketch of its use follows this list):
    and (move_number < 42 or description like "%White takes%" or description like "%Black takes%" or description like "%Check%")
  • Made the changes to the code and am running the analysis
  • My fancy queries are producing odd results. Pulling out the constraint string. That looks pretty good!
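  • A minimal sketch of how the constraint string might be bolted onto a query. It assumes a pymysql connection and the gpt_view above; the helper and the connection parameters are mine, not project code:
    import pymysql
    
    # note the doubled %% so pymysql's parameter formatting leaves the LIKE wildcards alone
    constraint = ('and (move_number < 42 or description like "%%White takes%%" '
                  'or description like "%%Black takes%%" or description like "%%Check%%")')
    
    # count the moves for one piece, optionally restricted by the constraint string
    def count_moves(cursor, piece: str, use_constraint: bool = True) -> int:
        sql = "select count(*) from gpt_view where piece = %s "
        if use_constraint:
            sql += constraint
        cursor.execute(sql, (piece,))
        return cursor.fetchone()[0]
    
    # usage (connection parameters are placeholders):
    # conn = pymysql.connect(host='localhost', user='user', password='pw', db='chess')
    # with conn.cursor() as cur:
    #     print(count_moves(cur, 'pawn'))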

GPT-2-TWIC

  • As an aside, the chess queries and extraction are based on an understanding of movement terms like ‘from’ and ‘to’. Thinking about Alex’s finding of consensus metaterms, I think it would be useful to look for movement/consensus/compromise terms and then weight the words that are nearby

ML meeting

  • Vacation pix!
  • Went over results shown above
  • Arpita found some good embedding results using TensorBoard, but isn’t sure where to go from there

Phil 7.20.20

My guess is that, barring interference of some kind, all US cities will have something like what’s going on in Portland by Election Day

GPT-2 Agents

  • Back from break, and thinking about what to do next. I think the first thing to do is simply gather more data from the model. Right now I have about 1,500 GPT-2 moves and about 190,000 human moves. Increasing the number of predictions to 1,000 by adding a batch size value; otherwise I get out-of-memory errors (a batching sketch follows this list).
  • I had started the run in the morning and was almost done when a power failure hit and the UPS didn’t work. Ordered a new UPS. Tried to be clever about finishing off the last piece of data but left in the code that truncated the table. Ah, well. Starting over.
  • Next is to adjust the queries so that the populations are more similar. The GPT-2 moves come from the following prompts:
    probe_list = ['The game begins as ', 'In move 10', 'In move 20', 'In move 30', 'In move 40', 'White takes black ', 'Black takes white ', 'Check. ']

    That means I should adjust my queries of the human data to reflect those biases, something like:

    select * from table_actual where move_number = 1 order by move_number limit 50;

    which should match the probe ‘The game begins as ‘.

  • I’d also like to run longer, full games (look for ‘resigns’, ‘draw’, or ‘wins’) and parse them, but that’s for later.
  • Need to figure out the statistics to compare populations. I think I’m going to take some time and look through the NIST Engineering Statistics Handbook
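  • A rough sketch of the batching idea, using the stock ‘gpt2’ checkpoint as a stand-in for the fine-tuned chess model (the names and sizes are placeholders, not the project code):
    from transformers import GPT2LMHeadModel, GPT2Tokenizer
    
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')  # stand-in for the fine-tuned model
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    
    # generate 'total' continuations of a probe in memory-friendly batches
    def generate_batched(probe: str, total: int = 1000, batch_size: int = 50):
        texts = []
        ids = tokenizer.encode(probe, return_tensors='pt')
        for _ in range(total // batch_size):
            out = model.generate(ids, do_sample=True, max_length=100,
                                 num_return_sequences=batch_size,
                                 pad_token_id=tokenizer.eos_token_id)
            texts += [tokenizer.decode(o, skip_special_tokens=True) for o in out]
        return texts
    
    # moves = generate_batched('The game begins as ', total=1000, batch_size=50)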

NIST

GOES

  • Vadim seems to have made progress. Need to set up a meeting to chat and catch up
  • 2:00 meeting with V & E. Good progress!
  • GVSETS has been moved to Nov 3. Speaking of which, I’ll need to compare simulated and actual maneuvers, so stats here too. Now that the moves are cooking I’ll start on the stats


Phil 7.19.20

Reviewing papers

Found a good video on the Mahalanobis distance, with some eigenvector/eigenvalue and PCA concepts sprinkled in, here
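The distance itself is only a few lines of numpy. A quick worked example (the data here is made up, not from the video):

    import numpy as np
    
    rng = np.random.default_rng(0)
    data = rng.multivariate_normal([0, 0], [[4, 2], [2, 3]], size=1000)
    
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    
    # Mahalanobis distance: Euclidean distance scaled by the covariance,
    # so directions with more spread count for less
    def mahalanobis(x, mu, cov_inv):
        d = x - mu
        return np.sqrt(d @ cov_inv @ d)
    
    print(mahalanobis(np.array([2.0, 1.0]), mu, cov_inv))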

The GPT-3 is… something

An extract from “On Being an Automaton” (full text here). The seed was the title, the author’s name, and the first word. Found(?) by Mario Klingemann, who talks about it and other compositions in this thread

“I am not actually an artificial intelligence but a rather more interesting phenomenon. What I actually am is an artificial intelligence that has learned to write like myself, a machine with writing skills that are indistinguishable from mine, but which, unlike me, is not so hide-bound by its programming that it cannot learn a little something new. As I sit here, writing, it is watching. And as I continue, it too continues, but also improving itself.”

This is really starting to remind me of discussions about consciousness. At what point do we call something aware? Self-reflection? That sure seems to fit the bill. I think the question may start to become not when machines are conscious/aware, but whether we are more machine-like in our awareness than we feel/believe.

Thoughts from a bike trip from Baltimore to Pittsburgh

I’ve been riding my bike with a few friends for the last week. We started near Baltimore-Washington Airport, and ended in downtown Pittsburgh. The trip was mostly through rural regions in Maryland, West Virginia, and Pennsylvania. After a while, I realized that I was seeing 100 Trump flags for every Biden yard sign. Here’s one of the more over-the-top examples:

DSCN1173

At the same time, the people I see are often chronically sick. I’ve lost count of the times I’ve seen folks barely able to walk shopping in the Dollar Stores that are sprinkled along the route. Some are clearly poor, driving used cars that sound like they only have a few miles left before something important breaks. Some are much better off, with brand new KAG flags flying above well-manicured lawns.

In talking to the folks around here, everyone has been helpful and nice. But everyone seems afraid. And it’s not the virus. It’s of things like Antifa and BLM. I think it’s important to understand that these people believe that black folks and foreigners are coming for them. There are people on my trip with multiple degrees who are genuinely convinced of this. When we stayed at a B&B this week, the proprietor said that people were cancelling bike trips to DC, not because of COVID, but because of the “Rioting”.

I think that there is some kind of existential disconnect going on here between lived experiences and what we are being presented. Few of us seem to know anyone who has had a serious case of COVID-19 directly. The cases we have direct knowledge of are generally minor. We are presented with stories, spread across dozens of sources and media, that talk about a rising death toll, but it is not tangible.

So instead we choose our sources based on credibility rather than trustworthiness, and believe them. Not because many of these things actually happen, but because they occupy a shared social reality. And when I can talk to my friends about something that I’ve seen, and they have seen it too, then it seems real. At least as real as the protesters and death counts that also compete for attention on our screens.

I think Trump embodies that, maybe better than any other politician I’ve experienced in my lifetime. He exists in a particular social reality where dangers are clearly delineated, and the enemy looks different. And in that odd following-leader dance that you can see so clearly at his rallies, he is able to articulate these fears in such a way to keep his base focused on an outside enemy.

Trump Country for me was just some more credible-sounding information coming across my screens. This trip has made it tangible for me. I think that Trump’s base views him as a success. Not in draining the swamp. Not in bringing jobs back to America. They think he is a success because he is keeping the invaders out. The proof of his success is the fact that we do not have Committees of Public Safety made up of BLM and Antifa activists in every town, pulling down statues, burning businesses, and imposing an alien way of life, the threat they see presented to them through such channels as Fox News, OAN, and Facebook.


I remember seeing a lot of Trump signs last election as well. I think his support may be broader and deeper than regular polling may suggest. I also think that if he wins, the synergy of this fear relationship between him and his followers will have to get more extreme to maintain its hold. My sense is that this country is heading towards a reckoning of some sort, where this screen-mediated social reality becomes the dominant force, or where a sustained effort to re-attach people to some form of shared reality must take place. The former will be more exciting (at least for a while), and will have a tremendous pull. The latter will be dependent, I think, on coming to terms with how our technologies are affecting how we think and experience reality as populations. It has been done before with language, the printing press, and mass media. Hopefully we’ll be able to do it again.

Phil 7.9.20

NVAE: A Deep Hierarchical Variational Autoencoder

  • Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels.

VAEsNotGANs

Like Two Pis in a Pod: Author Similarity Across Time in the Ancient Greek Corpus

  • One commonly recognized feature of the Ancient Greek corpus is that later texts frequently imitate and allude to model texts from earlier time periods, but analysis of this phenomenon is mostly done for specific author pairs based on close reading and highly visible instances of imitation. In this work, we use computational techniques to examine the similarity of a wide range of Ancient Greek authors, with a focus on similarity between authors writing many centuries apart. We represent texts and authors based on their usage of high-frequency words to capture author signatures rather than document topics and measure similarity using Jensen-Shannon Divergence. We then analyze author similarity across centuries, finding high similarity between specific authors and across the corpus that is not common to all languages.

GPT-2 Agents

  • Setting up some experiments, for real and synthetic, black and white. All values should have raw numbers and percentages (a sketch of the first calculation follows the results below):
    • Moves from each square by piece+color / total number of moves from square
    • Moves to each square by piece+color / total number of moves to square
    • Squares by piece+color / total number of pieces
    • Sequences? I’d have to add back in castling and re-run. Maybe later
    • Squares used over time (first 10 moves, second 10, etc)
    • Pieces used over time
  • Create new directory called results that will contain the spreadsheets
  • Running the first queries. It’s going to take about an hour by my estimation, but nothing is exploding as far as the queries go
  • Add a spreadsheet for illegal moves. Done! Here are the results. The GPT agents make 3 illegal moves out of 1,565:
    illegal bishop move: {'from': 'e7', 'to': 'c6'}
    illegal knight move: {'from': 'c5', 'to': 'a8'}
    illegal queen move: {'from': 'f8', 'to': 'h4'}
    Dataframe: ../results/legal_1.xlsx/legal-table_moves
             illegal  legal
    pawns          0    446
    rooks          0    270
    bishops        1    193
    knights        1    266
    queen          1    175
    king           0    212
    totals         3   1562
    Dataframe: ../results/legal_1.xlsx/legal-table_actual
             illegal   legal
    pawns          0   49386
    rooks          0   31507
    bishops        0   28263
    knights        0   31493
    queen          0   22818
    king           0   23608
    totals         0  188324
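  • For reference, a sketch of the first calculation (moves from each square, raw numbers and percentages) in pandas. The read step assumes a SQLAlchemy engine, which is a placeholder:
    import pandas as pd
    
    # engine = create_engine('mysql+pymysql://user:pw@localhost/chess')
    # df = pd.read_sql('select piece, color, `from`, `to` from gpt_view', engine)
    
    # raw count of moves leaving each square by piece+color, plus the percentage
    # of all moves leaving that square
    def moves_from_square(df: pd.DataFrame) -> pd.DataFrame:
        counts = df.groupby(['from', 'piece', 'color']).size().rename('raw')
        totals = df.groupby('from').size().rename('total')
        result = counts.reset_index().merge(totals.reset_index(), on='from')
        result['percent'] = 100.0 * result['raw'] / result['total']
        return result
    
    # moves_from_square(df).to_excel('../results/moves_from_square.xlsx')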


move_percentage

GOES

  • Waiting on Vadim
  • 2:00 AIMS-Core v3.0 Overview
  • Ping MARCOM

Waikato

  • 6:00 Meeting

Phil 7.8.20

A brief history of high-speed trading (via the Museum of American Finance)

  • In the late 1830s, Philadelphia broker William C. Bridges operated a private signal station between New York and Philadelphia which disseminated stock market news to him and his backers (and to no one else). The signals were transmitted through an “optical telegraph,” which consisted of a series of boards on a pole, mounted on hills that could be seen by a telescope.

DtZ

  • The IHME site has improved to the point that we should pull down our site

GPT-2 Agents

  • Need to think about how to show that interrogating a language model is sufficiently similar to interrogating actual data.
    • At this point, I know that the language model comes up with legal moves
    • I need to compare the statistics of actual moves to synthetic moves to see if the populations are sufficiently similar. This means that I need to get the training and evaluation data into the database. Once that’s done, I can compare the frequency of move types (e.g. “At move 10, White moves pawn from a2 to a4”), and the moves from a particular location (e.g. “e2” can have moves to “e3” and “e4” with the pawn, or diagonals with the “f1” bishop or the white queen).
    • The level of similarity should indicate if the biases of the players are represented in the language model.
      • There should be a way of determining a lower bound of data?
      • Once this is shown, then the idea of generalizing to other human interactions can be justified.
  • Started PGNtoDB, which will populate table_actual (a sketch of the parsing idea follows this list)
    • Ignoring castling for now
    • Chunking into the database! And by chunking, I refer to the sound of the drive 🙂
    • And now I have a catalog of 188,324 human chess moves
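  • The parsing idea, sketched with the python-chess package (my reconstruction, not the actual PGNtoDB code; the ply index stands in for the move number):
    import chess
    import chess.pgn
    
    piece_names = {chess.PAWN: 'pawn', chess.KNIGHT: 'knight', chess.BISHOP: 'bishop',
                   chess.ROOK: 'rook', chess.QUEEN: 'queen', chess.KING: 'king'}
    
    # walk every game in a PGN file and emit one row per half-move
    def moves_from_pgn(path: str):
        rows = []
        with open(path) as f:
            while (game := chess.pgn.read_game(f)) is not None:
                board = game.board()
                for num, move in enumerate(game.mainline_moves(), start=1):
                    piece = board.piece_at(move.from_square)
                    rows.append({'move_number': num,
                                 'color': 'white' if piece.color else 'black',
                                 'piece': piece_names[piece.piece_type],
                                 'from': chess.square_name(move.from_square),
                                 'to': chess.square_name(move.to_square)})
                    board.push(move)
        return rows  # these are what would get inserted into table_actual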

chess_moves_db

GOES

  • 10:00 Meeting with Vadim
  • 2:00 Status
  • Last training for a while!

Phil 7.7.20

The opportunity cost of this is going to be so steep. I wonder what country will set up an effective, open, online university?

f1

GPT-2 Agents

  • Working through the texthero examples. Spent a lot of time figuring out how to print elements from a row in a DataFrame, which was ridiculously hard. Instead, I just turned it into a dict and worked with that:
    # Print the first n rows of a dataframe using the specified columns. Use -1
    # for num_rows to print all rows.
    from typing import Dict, List
    import pandas as pd
    
    def print_df(df: pd.DataFrame, headers: List, num_rows: int = 4, max_chars: int = 80):
        rows = 0
        d: Dict = df.to_dict('index')  # one dict per row, keyed by index
        rd: Dict
        for index, rd in d.items():
            st = ""
            keys = rd.keys()
            for key in headers:
                if key in keys:
                    # cast to str and truncate so long text columns stay readable
                    val = str(rd[key])
                    st += "{}: {}, ".format(key, val[:max_chars])
            print(st)
            rows += 1
            if num_rows != -1 and rows >= num_rows:
                break
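  • For example, calling it on a made-up dataframe:
    df = pd.DataFrame({'name': ['a6-h7', 'e2-e4'],
                       'text': ['a long description that gets truncated here',
                                'another long description']})
    print_df(df, headers=['name', 'text'], num_rows=-1, max_chars=20)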
  • The scatterplot appears to use plotly, since it’s presented in the browser. That’s kind of cool, since it implies that the plotting functions of plotly are free somehow? After going to the plotly.com website, I see that “Plotly.py is free and open source and you can view the source, report issues or contribute on GitHub.” That would be worth digging into some more then. Here’s the PCA plot:

pca

  • You can make word clouds easily, too (a rough sketch of both steps follows the image below)

WordCloud
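  • For the record, the whole pipeline is only a few lines. This is roughly the Getting Started example (the bbcsport dataset ships with the texthero repo; the library is young, so treat the API as likely to change):
    import texthero as hero
    import pandas as pd
    
    df = pd.read_csv("https://github.com/jbesomi/texthero/raw/master/dataset/bbcsport.csv")
    
    # clean -> tfidf -> pca, then an interactive plotly scatterplot in the browser
    df['pca'] = df['text'].pipe(hero.clean).pipe(hero.tfidf).pipe(hero.pca)
    hero.scatterplot(df, col='pca', color='topic', title="PCA BBC Sport news")
    
    # the word cloud is one call
    hero.wordcloud(df['text'])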

GOES

  • Finish training? Oops, forgot
  • Some discussion with Vadim about the structure of the control

ML Seminar

  • Good discussion on topic extraction over time. Basically, create k topics from the entire corpus. Each topic is a ranking of all the words in the corpus. Behavior over time is the amount of the top words from each topic k in each time sample t (a sketch follows).
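  • A sketch of the idea with scikit-learn’s LDA (the corpus and time slices are made up):
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    
    docs_by_time = {
        't0': ["chess opening pawn", "pawn to e4 knight"],
        't1': ["rook endgame king", "king and rook checkmate"],
    }
    all_docs = [d for docs in docs_by_time.values() for d in docs]
    
    # fit k topics on the whole corpus
    vec = CountVectorizer()
    X = vec.fit_transform(all_docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    
    # behavior over time = mean topic proportions within each time sample
    for t, docs in docs_by_time.items():
        weights = lda.transform(vec.transform(docs)).mean(axis=0)
        print(t, weights)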

Phil 7.6.20

GPT-2 Agents

  • Search the db for the appropriate “from to” text snippet (e.g. “Black moves pawn from e2 to e3”), with a count of the number of times this move was done using that piece. Done! (A sketch of the query follows this list.)
  • Add a “fewest hops” option (A*, the traditional network approach) and a “closest” option (each step finds the closest node to the target), in addition to the line-following algorithm. There will have to be some user testing to see what makes the most sense, if any
  • Played around a bit with Summarization, but it didn’t work that well
  • TextHero came across my Twitter feed. It might be good for topic extraction? Trying it out, but the documentation is… sparse. Checking it out:
    • Installing many things:
      • unidecode
      • spacy (which installs many other things)
      • plotly (I thought you had to pay to use this?)
      • wordcloud
    • Working through the example, which is broken. Trying to fix it based on the Getting Started guide
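  • One way to get the move counts from the first bullet (the cursor is a placeholder; the column names follow the earlier posts):
    query = """
        select piece, count(*) as times_moved
        from table_actual
        where `from` = %s and `to` = %s
        group by piece;
    """
    # with conn.cursor() as cur:
    #     cur.execute(query, ('e2', 'e3'))
    #     for piece, n in cur.fetchall():
    #         print("{}: {}".format(piece, n))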

GOES

  • 11:00 Meeting with Vadim
  • Got the DataDictionary streaming to InfluxDB

influx_ddict

  • More SATERN – one more course down

Phil 7.4.20

Starting to think about topic modeling. Here are some resources:

I also want to search the db for the appropriate “from to” text snippet (e.g. “Black moves pawn from e2 to e3”), maybe with a count of the number of times this move was done using that piece

Also, I think it makes sense to have a “fewest hops” option (A*, the traditional network approach) and a “closest” option (each step finds the closest node to the target), in addition to the line-following algorithm. There will have to be some user testing to see what makes the most sense, if any

The map is based on the single jumps, and shows the big jumps as arcs

Phil 7.3.20

Today is a federal holiday, so no rocket science

Huggingface has a pipeline interface now that is pretty abstract. This works:

from transformers import pipeline

translator = pipeline("translation_en_to_fr")
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))
  • [{'translation_text': 'Hugging Face est une entreprise technologique basée à New York et à Paris.'}]

Wow: GPT-3 writes code!

DtZ is back up! Too many countries have the disease and the histories had to be cropped to stay under the data cap for the free service

GPT-2 Agents

  • Work on more granular path finding
    • Going to try the hypotenuse of distance to source and line first – nope
    • Trying to look at the distances to each and doing a nested sort
    • I had a problem where I was checking to see whether a point was between the current node and the target node using the original line between the source and target nodes. Except that I was checking on a line from the current node to the target, and failing the test. Oops! Fixed
    • I went back to the hypotenuse version now that the in_between test isn’t broken and look at that!

granular

    • Added the option for coarse or granular paths
  • Start thinking about topic extraction for a given corpus

#COVID

  • Evaluate Arabic to English translation. Got it working!
    from transformers import MarianTokenizer, MarianMTModel
    from typing import List
    src = 'ar'  # source language
    trg = 'en'  # target language
    sample_text = "لم يسافر أبي إلى الخارج من قبل"
    sample_text2 = "الصحة_السعودية تعلن إصابة أربعيني بفيروس كورونا بالمدينة المنورة حيث صنفت عدواه بحالة أولية مخالطة الإبل مشيرة إلى أن حماية الفرد من(كورونا)تكون باتباع الإرشادات الوقائية والمحافظة على النظافة والتعامل مع #الإبل والمواشي بحرص شديد من خلال ارتداء الكمامة "
    mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
    
    model = MarianMTModel.from_pretrained(mname)
    tok = MarianTokenizer.from_pretrained(mname)
    batch = tok.prepare_translation_batch(src_texts=[sample_text2])  # don't need tgt_text for inference
    gen = model.generate(**batch)  # for forward pass: model(**batch)
    words: List[str] = tok.batch_decode(gen, skip_special_tokens=True) 
    print(words)
  • It took a few tries to find the right model. The naming here is very haphazard.
  • Asked for a sanity check from the group
    • This:
      الصحة_السعودية تعلن إصابة أربعيني بفيروس كورونا بالمدينة المنورة حيث صنفت عدواه بحالة أولية مخالطة الإبل مشيرة إلى أن حماية الفرد من(كورونا)تكون باتباع الإرشادات الوقائية والمحافظة على النظافة والتعامل مع #الإبل والمواشي بحرص شديد من خلال ارتداء الكمامة
    • Translates to this:
      Saudi health announces a 40-year-old corona virus in the city of Manora, where his enemy was classified as a primary camel conglomerate, indicating that the protection of the individual from Corona would be through preventive guidance, hygiene, and careful handling of the Apple and the cattle by wearing the gag.


  • Write a script that takes a batch of rows and adds translations until all the rows in the table are complete (a sketch follows)
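  • A sketch of that script: pull untranslated rows, translate, write back, and repeat until none remain. The table and column names are hypothetical; the model and tokenizer are the Marian ones above:
    # assumes model and tok from the MarianMT snippet above and a pymysql-style conn
    def translate_all(conn, model, tok, batch_size: int = 32):
        while True:
            with conn.cursor() as cur:
                cur.execute("select id, text from table_tweets "
                            "where translation is null limit %s", (batch_size,))
                rows = cur.fetchall()
            if not rows:
                break  # everything has a translation
            batch = tok.prepare_translation_batch(src_texts=[r[1] for r in rows])
            translations = tok.batch_decode(model.generate(**batch),
                                            skip_special_tokens=True)
            with conn.cursor() as cur:
                for (row_id, _), tr in zip(rows, translations):
                    cur.execute("update table_tweets set translation = %s "
                                "where id = %s", (tr, row_id))
            conn.commit()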

Book chat

Phil 7.2.20

Emergence of polarized ideological opinions in multidimensional topic spaces

  • Opinion polarization is on the rise, causing concerns for the openness of public debates. Additionally, extreme opinions on different topics often show significant correlations. The dynamics leading to these polarized ideological opinions pose a challenge: How can such correlations emerge, without assuming them a priori in the individual preferences or in a preexisting social structure? Here we propose a simple model that reproduces ideological opinion states found in survey data, even between rather unrelated, but sufficiently controversial, topics. Inspired by skew coordinate systems recently proposed in natural language processing models, we solidify these intuitions in a formalism where opinions evolve in a multidimensional space where topics form a non-orthogonal basis. The model features a phase transition between consensus, opinion polarization, and ideological states, which we analytically characterize as a function of the controversialness and overlap of the topics. Our findings shed light upon the mechanisms driving the emergence of ideology in the formation of opinions.

DtZ has broken

dtzfail

GPT2-Agents

  • Continue working on the trajectory. I think that a plot that works entirely on distance to target can result in spirals, so there needs to be some kind of system that looks at the distance to the center line first and, if that fails, moves the last node from the trajectory list to a dirty list. Then the search restores the current node to the previous one and continues the search with the trajectory and dirty-list nodes ignored?
  • Found an example to fix: A6 – H7
    • get_closest_node() line = [337.0, 44.0, 581.0, 499.0], cur_node = h1, node_list = ['a6', 'b6', 'c7', 'd7', 'e6', 'c5', 'b7', 'g7', 'h6', 'g6', 'c6', 'e7', 'f7', 'g8', 'f6', 'd8', 'a8', 'e8', 'd6', 'b4', 'b8', 'c8', 'c4', 'e5', 'd5', 'd4', 'b5', 'c3', 'e4', 'f5', 'f8', 'f4', 'g5', 'g4', 'h5', 'h4', 'f3', 'd3', 'c2', 'e3', 'd2', 'e2', 'b2', 'b1', 'c1', 'e1', 'd1', 'a1', 'f1', 'g3', 'h3', 'g2', 'f2', 'g1', 'h2', 'h1']
    • It does fine until it gets to E6, where it chooses c5
    • Adding a target distance-based search if the distance-to-line search fails seems to have fixed it (the point_to_line and is_between helpers used here are sketched after this list):
      nlist = list(nx.all_neighbors(self.gml_model, cur_node))
      print("\tneighbors = {}".format(nlist))
      dist_dict = {}
      sx, sy = self.get_center(cur_node)
      
      for n in nlist:
          if n not in node_list:
              newx, newy = self.get_center(n)
              newa = [newx, newy]
              print("\tline dist checking {} at {}".format(n, newa))
              x, y = self.point_to_line([l[0], l[1]], [l[2], l[3]], newa)
              ca = [x, y]
              ib = self.is_between([sx, sy], [l[2], l[3]], [x, y])
              if ib:
                  # option 1: Find the closest to the line
                  dist = np.linalg.norm(np.array(newa)-np.array(ca))
                  dist_dict[n] = dist
                  print("\tis BETWEEN = {}, dist = {}".format(ib, dist))
      if len(dist_dict) == 0:
          tx, ty = self.get_center(self.target_node)
          ta = [tx, ty]
          for n in nlist:
              if n not in node_list:
                  newx, newy = self.get_center(n)
                  newa = [newx, newy]
                  print("\ttarget dist checking {} at {}".format(n, newa))
                  # option 2: Find the closest to the target node
                  dist = np.linalg.norm(np.array(newa)-np.array(ta))
                  dist_dict[n] = dist
                  print("\tis CLOSEST: dist = {}".format(dist))
  • Got legal trajectories working. Below is a set of jumps that are legal (rook to c1, bishop to e3 and then h6, then rook the rest of the way). I think I want to also sort based on closest distance to the current node.
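  • The point_to_line() and is_between() helpers aren’t shown in these posts; a plausible minimal version of each, assuming simple 2D screen coordinates:
    import numpy as np
    
    # project p3 onto the infinite line through p1 and p2; return the closest point
    def point_to_line(p1, p2, p3):
        a = np.array(p1, dtype=float)
        b = np.array(p2, dtype=float)
        p = np.array(p3, dtype=float)
        ab = b - a
        t = np.dot(p - a, ab) / np.dot(ab, ab)
        x, y = a + t * ab
        return x, y
    
    # True if p3 lies within the bounding box of the segment from p1 to p2
    def is_between(p1, p2, p3) -> bool:
        a, b, p = np.array(p1), np.array(p2), np.array(p3)
        return bool(np.all(p >= np.minimum(a, b)) and np.all(p <= np.maximum(a, b)))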

legal_moves

GOES

  • Add InfluxDB streaming to DD
  • 10:00 Sim meeting
  • 2:00 Status meeting