# Phil 7.24.20

I had home-grown tomatoes this morning!

And I hung up my shiny new diploma!

GPT-2 Agents

• I think it’s time to start writing the paper. Something like Synthetic Agents in Language Models: Navigating belief
• Using the IEEE (ACSOS) template
• Set up the paper with authors and dummy text. Starting to fill in the pieces
• Writing the methods section and needed to count the number of games (#draw + #resigns). The easiest way to do this was just to count all the word frequencies. Here are the top terms:
to : 1474559
from : 1472081
moves : 1472071
white : 1062561
black : 1056840
pawn : 392494
in : 330044
move : 307701
takes : 307166
rook : 258476
knight : 250998
bishop : 225442
queen : 175254
king : 173837
pawn. : 145164
check. : 91512

• The list goes on a while. The most mentioned squares are d4 (56,224), d5 (53,986), and f6 (48,772)
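
The frequency count above can be sketched with collections.Counter (a minimal reconstruction, not the actual script, and the sample text is made up):

```python
from collections import Counter

def word_frequencies(text: str) -> Counter:
    # lowercase and split on whitespace; punctuation stays attached,
    # which is why "pawn." and "pawn" show up as separate terms
    return Counter(text.lower().split())

sample = ("White moves pawn from e2 to e4. Black moves pawn from e7 to e5. "
          "White moves knight from g1 to f3.")
freq = word_frequencies(sample)
print(freq.most_common(3))  # [('moves', 3), ('from', 3), ('to', 3)]
```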

God help me, I’m updating my IDE

GOES

• Need to start working on the mapping of rwheels to the inertial(?) frame. The thing is, the yaw axis rotates 360 degrees every day, so what frame do we use? My thinking is that the inertial frame (as defined by the star tracker) is unchanging, but we have a rotating frame inside it. The satellite’s moves are relative to that rotating frame plus the inertial frame. So the satellite’s first task is to keep its orientation relative to the rotating frame, then execute commands with respect to that frame. So a stacked matrix of inertial frame, Earth frame, vehicle matrix, and then a matrix for each of the rwheels?
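
The stacked-matrix idea can be sketched with plain rotation matrices (all names and numbers here are mine, just to illustrate composing the frames; not flight code):

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    # rotation about the yaw (z) axis by theta radians
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# inertial frame from the star tracker: identity by definition
inertial = np.eye(3)

# the Earth-pointing frame sweeps 360 degrees per day about yaw;
# six hours in, it has turned 90 degrees
seconds_per_day = 86400.0
t = 21600.0
earth_frame = rot_z(2.0 * np.pi * t / seconds_per_day)

# a commanded vehicle attitude relative to the rotating frame (made-up 5 degrees)
vehicle = rot_z(np.radians(5.0))

# stacked transform: inertial -> Earth-rotating -> vehicle
total = inertial @ earth_frame @ vehicle
```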

# Phil 6.5.20

GPT-2 Agents

• Started a Google Doc for the GPT-2 Chess agents that will be grist for the paper(s)
• Create probes for each piece, like:
• white moves pawn from e2 to
• black moves pawn from e7 to
• A slightly more sophisticated parser will need to work with “The game begins”
• I can take the results of multiple probes and store them in the table_moves, then run statistics by color, piece, etc
• Then see if it’s possible to connect one piece to another piece using a “from/to chain” across multiple pieces. There will probably be some sort of distribution where the median(?) value should be a set of adjacent squares.
• The connections can be tested by building adjacency matrices by piece and by move number range
• Started ChessMovesToDb. Might as well work on the tricky parse of “The game begins”. Making progress. My initial thought on how to parse moves doesn’t handle weird openings like “4.e3, Gligoric system with 7…dc”. Need to strip to the first occurrence of “move”, I think:
white uses the Nimzo-Indian opening. and black countering with 4.e3, Gligoric system with 7...dc. White moves pawn from d2 to d4. Black moves knight from g8 to f
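
A hedged sketch of that parse (my reconstruction, not ChessMovesToDb itself): a regex over the “color moves piece from X to Y” clauses never matches the opening narration, which amounts to the same strip-to-the-first-move idea:

```python
import re

# pattern for "White moves pawn from d2 to d4" style clauses
move_re = re.compile(
    r'(white|black) moves (\w+) from ([a-h][1-8]) to ([a-h][1-8])',
    re.IGNORECASE)

def parse_moves(game_text: str):
    # the opening narration contains no "moves ... from ... to" clause,
    # so findall effectively skips everything before the first real move
    return move_re.findall(game_text)

text = ("The game begins as white uses the Nimzo-Indian opening. and black "
        "countering with 4.e3, Gligoric system with 7...dc. White moves pawn "
        "from d2 to d4. Black moves knight from g8 to f6.")
print(parse_moves(text))
```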

GOES

• More timing tests
• Add explicit time to the write
• This should also be the basis of the system that will pull data from the DevLab Influx
• Continue search for important mnemonics during yaw flip – nope, still can’t log into DevLab influx
• Working on setting up arbitrary time spans and a new bucket for tests. Writing data works. Now I need to extend the number of series, tags, etc. All the writes are working. I’ll do the reads Monday. Everything looks pretty consistent, though.
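
A minimal sketch of the explicit-timestamp write, expressed as InfluxDB line protocol (the measurement, tag, and field names are made up; the real writer goes through the client library):

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    # InfluxDB line protocol: measurement,tag=v,... field=v,... timestamp(ns)
    # supplying ts_ns explicitly keeps replayed test data at its original times
    tag_str = ",".join("{}={}".format(k, v) for k, v in tags.items())
    field_str = ",".join("{}={}".format(k, v) for k, v in fields.items())
    return "{},{} {} {}".format(measurement, tag_str, field_str, ts_ns)

line = to_line_protocol("rwheel_rpm", {"vehicle": "sat_1"},
                        {"rpm": 1250.0}, 1591358400000000000)
print(line)  # rwheel_rpm,vehicle=sat_1 rpm=1250.0 1591358400000000000
```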

# Phil 2.6.20

7:00 – 4:00  ASRC GOES

Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks

• Evolution is a blind fitting process by which organisms become adapted to their environment. Does the brain use similar brute-force fitting processes to learn how to perceive and act upon the world? Recent advances in artificial neural networks have exposed the power of optimizing millions of synaptic weights over millions of observations to operate robustly in real-world contexts. These models do not learn simple, human-interpretable rules or representations of the world; rather, they use local computations to interpolate over task-relevant manifolds in a high-dimensional parameter space. Counterintuitively, similar to evolutionary processes, over-parameterized models can be simple and parsimonious, as they provide a versatile, robust solution for learning a diverse set of functions. This new family of direct-fit models present a radical challenge to many of the theoretical assumptions in psychology and neuroscience. At the same time, this shift in perspective establishes unexpected links with developmental and ecological psychology.

•  Defense
• Discussion slides
• contributions – done
• designing for populations 1 & 2- done
• Diversity and resilience- done
• Non-human agents- done
• Reflection and reflex- done
• Ethical considerations- done
• Ethics of diversity injection
• Ethics of belief space cartography
• GOES
• Status report
• Get signature from Aaron at 7:30

# Phil 11.8.19

7:00 – 3:00 ASRC GOES

• Dissertation
• Usability study! Done!
• Discussion. This is going to take some framing. I want to tie it back to earlier navigation, particularly the transition from stories and mappaemundi to isotropic maps of Ptolemy and Mercator.
• Sent Don and Danilo sql file
• Start satellite component list
• Evolver
• Adding threads to handle the GPU. This looks like what I want (from here):
import logging
import concurrent.futures
import time

def thread_function(name):
    # stand-in for real GPU work: each task just sleeps
    logging.info("Task %s: starting", name)
    time.sleep(2)
    logging.info("Task %s: finishing", name)

if __name__ == "__main__":
    num_gpus = 1
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    # one worker thread per gpu, iterating over a larger set of tasks
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
        executor.map(thread_function, range(4))

    logging.info("Main    : all done")
As you can see, it’s possible to have a thread for each gpu, while having them iterate over a larger set of tasks. Now I need to extract the gpu name from the thread info. In other words, ThreadPoolExecutor-0_0 needs to map to gpu:1.

• Ok, this seems to do everything I need, with less cruft:
import concurrent.futures
import threading
import time
import re

# regex that captures the LAST number in a string,
# e.g. "ThreadPoolExecutor-0_1" -> "1"
last_num_in_str_re = r'(\d+)(?!.*\d)'
prog = re.compile(last_num_in_str_re)

def run_task(args: dict):
    # map the worker thread's name (ThreadPoolExecutor-0_0, -0_1, ...) to a gpu
    num = prog.search(threading.current_thread().name)
    gpu_str = "gpu:{}".format(int(num.group(0)) + 1)
    print("{}: starting on  {}".format(args["name"], gpu_str))
    time.sleep(args["sleep"])
    print("{}: finishing on  {}, after sleeping {} seconds".format(
        args["name"], gpu_str, args["sleep"]))

if __name__ == "__main__":
    num_gpus = 2
    task_list = [{"name": "task_{}".format(i), "sleep": (20 + i) / 10}
                 for i in range(5)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
        executor.map(run_task, task_list)

    print("Finished Main")

And that gives me:

task_0: finishing on  gpu:1, after sleeping 2.0 seconds
task_1: finishing on  gpu:2, after sleeping 2.1 seconds
task_2: finishing on  gpu:1, after sleeping 2.2 seconds
task_3: finishing on  gpu:2, after sleeping 2.3 seconds
task_4: finishing on  gpu:1, after sleeping 2.4 seconds
Finished Main

So the only thing left is to integrate this into TimeSeriesMl2

# Phil 11.7.19

7:00 – 5:00 ASRC GOES

• Dissertation
• ML+Sim
• Save actual and inferred efficiency to excel and plot
• Create an illustration that shows how the network is trained, validated against the sim, then integrated into the operating system. (maybe show a physical testbed for evaluation?)
• Demo at the NSOF
• Went ok. Next steps are a sufficiently realistic model that can interpret an actual malfunction
• Put together a Google Doc/Sheet that has the common core elements with which we can model most satellites (LEO, MEO, GEO, and HEO?). What are the common components between cubesats and the James Webb?
• Detection of station-keeping failure is a possibility
• Also, high-dynamic phases, like orbit injection might be low-ish fruit
• Tomorrow, continue on the GPU assignment in the evolver

# Phil 7.18.19

7:00 – 5:00 ASRC GOES

• Started to fold Wayne’s comments in
• Working on the Kauffman section
• Tried making it so K can be higher than N with resampling and I still can’t keep the system from converging, which makes me think that there is something wrong with the code.
• Send reviews to Antonio – done
• Back to work on the physics model. Make sure to include a data dictionary mapping system to support Bruce’s concept
• Code autocompletion using deep learning
• A lot of flailing today but no good progress.
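
For reference, the K>N-via-resampling idea can be sketched as a Kauffman-style random boolean network where each node’s K inputs are drawn with replacement (my reconstruction to illustrate the setup, not the code in question):

```python
import random

def make_network(n: int, k: int, seed: int = 42):
    # each of n nodes gets k inputs drawn WITH replacement, so k can exceed n
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [dict() for _ in range(n)]  # lazily-built random truth tables
    return inputs, tables, rng

def step(state, inputs, tables, rng):
    new_state = []
    for i in range(len(state)):
        key = tuple(state[j] for j in inputs[i])
        if key not in tables[i]:
            tables[i][key] = rng.randrange(2)  # fix each node's rule on first visit
        new_state.append(tables[i][key])
    return new_state

inputs, tables, rng = make_network(n=4, k=6)  # K > N
state = [rng.randrange(2) for _ in range(4)]
for _ in range(10):  # iterate and watch whether the trajectory settles
    state = step(state, inputs, tables, rng)
```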

# Phil 7.11.19

7:00 – 4:30 ASRC GOES

• Ping Antonio – Done
• Dissertation
• More Bones in a Hut. Found the online version of the Hi-Lo chapter from BIC. Explaining why Hi-Lo is different from IPD.
• More reaction wheel modeling
• Get flight and hotel for July 30 trip – Done
• So this is how you should install Panda3d and examples:
• First, make sure that you have Python 3.7 (64 bit! The default is 32 bit). Make sure that your path environment points to this, and not any other 3.x versions that you’re hoarding on your machine.
• pip install panda3d==1.10.3
• Then download the installer and DON’T select the python 3.7 support:
• Finish the install and verify that the demos run (e.g. python \Panda3D-1.10.3-x64\samples\asteroids\main.py):
• That’s it!
• Discussed DARPA solicitation HR001119S0053 with Aaron. We know the field of study – logistics and supply chain, but BD has totally dropped the ball on deadlines and setting up any kind of relationship with the points of contact. We countered that we could write a paper and present at a venue to gain access and credibility that way.
• There is a weekly XML from the FBO. Downloading this week’s to see if it’s easy to parse and search
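
If the feed really is plain XML, the parse-and-search step could be as small as this (the tag names and sample content here are guesses, not the actual FBO schema):

```python
import xml.etree.ElementTree as ET

# hypothetical notice structure, just to show the search pattern
sample = """<NOTICES>
  <PRESOL><SUBJECT>Supply chain logistics study</SUBJECT></PRESOL>
  <PRESOL><SUBJECT>Road paving services</SUBJECT></PRESOL>
</NOTICES>"""

def search_notices(xml_text: str, keyword: str):
    # case-insensitive keyword match over every SUBJECT element
    root = ET.fromstring(xml_text)
    return [s.text for s in root.iter("SUBJECT")
            if keyword.lower() in s.text.lower()]

print(search_notices(sample, "logistics"))
```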

# Phil 10.17.18

7:00 – 4:00 Antonio Workshop

# Phil 12.3.15

7:00 – 5:00 VTX

• Learning: Genetic Algorithms
• Rank space (probability is based on unsorted values??)
• Simulated annealing – reducing step size.
• Diversity rank (from the previous generation) plus fitness rank
• Some more timing results. The view test (select count(*) from tn_view_network_items where network_id = 1) for the small network_1 is about the same as the pull for the large network_8, about 0.75 sec. The pull from the association table without the view is very fast – 0.01 sec for network_1 and 0.02 sec for network_8. At 0.02 sec for 10,000 rows, a linear scale-up suggests a 1,000,000 item pull would take 1-2 seconds.
• mysql> select count(*) from tn_associations where network_id = 1;
11
1 row in set (0.01 sec)

mysql> select count(*) from tn_associations where network_id = 8;
10000
1 row in set (0.01 sec)

mysql> select count(*) from tn_view_network_items where network_id = 8;
10000
1 row in set (0.88 sec)

mysql> select count(*) from tn_view_network_items where network_id = 1;
11
1 row in set (0.71 sec)
• Field trip to Wall, NJ
• Learned more about the project, started to put faces to names
• Continued to look at DB engines for the derived DB. Discovered WebScaleSQL, which is a collaboration between Alibaba, Facebook, Google, LinkedIn, and Twitter to produce a big(!!) version of MySql.
• More discussions with Aaron D. about control systems, which means I’m going to be leaning on my NIST work again.