# Phil 11.15.19

7:00 – ASRC GOES

• Dissertation – starting the discussion section
• I’m thinking about objective functions and how individual and group objectives work together, particularly in extreme conditions.
• In extreme situations, the number of options available to an agent or group is diminished. There may be only one move apparently available in a chess game. A race car at the limits of adhesion has only one path through a turn. A boxer has a tiny window to land a blow. As the floodwaters rise, the range of options diminishes. In a tsunami, there is only one option – run.
• Here’s a section from article 2 of the US Military Code of Conduct (from here):
• Surrender is the willful act of members of the Armed Forces turning themselves over to enemy forces when not required by utmost necessity or extremity. Surrender is always dishonorable and never allowed. When there is no chance for meaningful resistance, evasion is impossible, and further fighting would lead to their death with no significant loss to the enemy, members of Armed Forces should view themselves as “captured” against their will versus a circumstance that is seen as voluntarily “surrendering.”
• If a machine is trained for combat, will it have learned the concept of surrender? According to the USCoC, no – surrender is never allowed. A machine trained to “win”, like Google’s AlphaGo, does not learn to resign. That part has to be explicitly coded in (from Wired):
• According to David Silver, another DeepMind researcher who led the creation of AlphaGo, the machine will resign not when it has zero chance of winning, but when its chance of winning dips below 20 percent. “We feel that this is more respectful to the way humans play the game,” Silver told me earlier in the week. “It would be disrespectful to continue playing in a position which is clearly so close to loss that it’s almost over.”
• Human organizations, like armies and companies, are a kind of superhuman intelligence, made up of human parts with their own objective functions. In the case of a company, that objective is often to maximize shareholder value (NYTimes, by Milton Friedman):
• But the doctrine of “social responsibility” taken seriously would extend the scope of the political mechanism to every human activity. It does not differ in philosophy from the most explicitly collectivist doctrine. It differs only by professing to believe that collectivist ends can be attained without collectivist means. That is why, in my book “Capitalism and Freedom,” I have called it a “fundamentally subversive doctrine” in a free society, and have said that in such a society, “there is one and only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”
• When any kind of population focuses singly on a particular goal, it creates shared social reality. The group aligns with the goal and pursues it. In the absence of the awareness of the environmental effects of this orientation, it is possible to stampede off a cliff, or shape the environment so that others deal with the consequences of this goal.
• It is doubtful that many people deliberately choose to be obese. However, markets and the profit motive have resulted in a series of innovations, ranging from agriculture to aisles of high-fructose corn syrup-based drinks at the local supermarket. The logistics chain that can create and sell a 12oz can of brand-name soda for about 35 cents is a modern miracle, optimized to maximize income for every link in the chain. But in this case, the costs of competition have created an infinite supply of heavily marketed empty calories. Even though we are aware at some level that we should rarely – if ever – have one of these beverages, they are consumed by the billions.
• The supply chain for soda is a form of superintelligence, driven by a simple objective function. It is resilient and adaptive, capable of dealing with droughts, wars, and changing fashion. It is also contributing to the deaths of approximately 300,000 Americans annually.
• How is this like combat? Reflexive vs. reflective. Low-diversity thinking is a short-term benefit for many organizations: it enables first-mover advantage, which can serve to crowd out more diverse (more expensive) thinking. More here…

# Phil 11.14.19

7:00 – 3:30 ASRC GOES

• Dissertation – Done with Human Study!
• Evolver
• Work on parameter passing and function storing
• You can use the * operator before an iterable to expand it within the function call. For example:
timeseries_list = [timeseries1, timeseries2, ...]
r = scikits.timeseries.lib.reportlib.Report(*timeseries_list)
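• A minimal self-contained sketch of the same * unpacking, with made-up names instead of the scikits.timeseries call:

```python
def total(a, b, c):
    # Stand-in for any function that takes positional arguments
    return a + b + c

args = [1, 2, 3]
# The * operator expands the iterable into positional arguments
result = total(*args)  # equivalent to total(1, 2, 3)
```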
• Here’s the running code with variable arguments
def plus_func(v1: float, v2: float) -> float:
    return v1 + v2

def minus_func(v1: float, v2: float) -> float:
    return v1 - v2

def mult_func(v1: float, v2: float) -> float:
    return v1 * v2

def div_func(v1: float, v2: float) -> float:
    return v1 / v2

if __name__ == '__main__':
    func_array = [plus_func, minus_func, mult_func, div_func]

    vf = EvolveAxis("func", ValueAxisType.FUNCTION, range_array=func_array)
    v1 = EvolveAxis("X", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)
    v2 = EvolveAxis("Y", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)

    for f in func_array:
        result = vf.get_random_val()
        print("------------\nresult = {}\n{}".format(result, vf.to_string()))
• And here’s the output
------------
result = -1.0
func: cur_value = div_func
X: cur_value = -1.75
Y: cur_value = 1.75
------------
result = -2.75
func: cur_value = plus_func
X: cur_value = -0.25
Y: cur_value = -2.5
------------
result = 3.375
func: cur_value = mult_func
X: cur_value = -0.75
Y: cur_value = -4.5
------------
result = -5.0
func: cur_value = div_func
X: cur_value = -3.75
Y: cur_value = 0.75
• Now I need to get this to work with different functions with different arg lists. I think I can do this with an EvolveAxis containing a list of EvolveAxis with functions. Done, I think. Here’s what the calling code looks like:
# create a set of functions that all take two arguments
func_array = [plus_func, minus_func, mult_func, div_func]
vf = EvolveAxis("func", ValueAxisType.FUNCTION, range_array=func_array)
v1 = EvolveAxis("X", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)
v2 = EvolveAxis("Y", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)

# create a single function that takes no arguments
vp = EvolveAxis("random", ValueAxisType.FUNCTION, range_array=[random.random])

# create a set of Axis from the previous function evolve args
axis_list = [vf, vp]
vv = EvolveAxis("meta", ValueAxisType.VALUEAXIS, range_array=axis_list)

# run four times
for i in range(4):
    result = vv.get_random_val()
    print("------------\nresult = {}\n{}".format(result, vv.to_string()))
• Here’s the output. The random function has all the decimal places:
------------
result = 0.03223958125899473
meta: cur_value = 0.8840652389671935
------------
result = -0.75
meta: cur_value = -0.75
------------
result = -3.5
meta: cur_value = -3.5
------------
result = 0.7762888191296017
meta: cur_value = 0.13200324934487906
• Verified that everything still works with the EvolutionaryOptimizer. Now I need to make sure that the new mutations include these new dimensions.
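• The EvolveAxis internals aren’t shown here, so this is only a generic sketch of what mutating a hierarchy of axes could look like – pick any dimension in the tree, including nested ones, and re-sample it (class and method names are hypothetical stand-ins):

```python
import random

class Axis:
    # Minimal stand-in for EvolveAxis: a name, a range of values,
    # a current value, and optional child axes (hypothetical structure)
    def __init__(self, name, values, children=None):
        self.name = name
        self.values = values
        self.cur = random.choice(values)
        self.children = children or []

    def flatten(self):
        # This axis plus all descendants, so a mutation can reach
        # nested dimensions as well as top-level ones
        found = [self]
        for c in self.children:
            found.extend(c.flatten())
        return found

def mutate(root):
    # Pick one axis anywhere in the hierarchy and re-sample its value
    target = random.choice(root.flatten())
    target.cur = random.choice(target.values)

x = Axis("X", [-1.0, 0.0, 1.0])
y = Axis("Y", [-1.0, 0.0, 1.0])
func = Axis("func", ["plus", "minus"], children=[x, y])
mutate(func)
```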

• I think I should also move TF2OptimizationTestBase to TimeSeriesML2?
• Starting Human Compatible

# Phil 11.5.19

“Everything that we see is a shadow cast by that which we do not see.” – Dr. King

ASRC GOES 7:00 – 4:30

• Dissertation – more human study. Pretty smooth progress right now!
• Cleaning up the sim code for tomorrow – done. All the prediction and manipulation to change the position data for the RWs and the vehicle are done in the inference section, while the updates to the drawing nodes are separated.
• I think this is the code to generate GPT-2 Agents?: github.com/huggingface/transformers/blob/master/examples/run_generation.py

# Phil 11.4.19

7:00 – 9:00 ASRC GOES

• Cool thing: Our World in Data
• The goal of our work is to make the knowledge on the big problems accessible and understandable. As we say on our homepage, Our World in Data is about Research and data to make progress against the world’s largest problems.
• Dissertation – more human study
• This is super-cool: The Future News Pilot Fund: Call for ideas
• Between February and June 2020 we will fund and support a community of changemakers to test their promising ideas, technologies and models for public interest news, so communities in England have access to reliable and accurate news about the issues that matter most to them.
• October status report
• Sim + ML next steps:
• I can’t do ensemble realtime inference because I’d need a gpu for each model. This means that I need to get the best “best” model and use that
• Run the evolver to see if something better can be found
• Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value – done
• Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
• Graphics are done
• Need to make second control system and ‘sim’ that uses inferred efficiency. Didn’t have to do all that. What I’m really doing is calculating rw angles based on the voltage and inferred efficiency. I can take the commands from the control system for the ‘actual’ satellite.
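• The calculation is simple enough to sketch. Assuming wheel rate is proportional to commanded voltage, derated by the inferred efficiency (the gain k, dt, and the numbers are all made up):

```python
def update_rw_angle(angle, voltage, efficiency, dt, k=1.0):
    # Wheel rate is assumed proportional to commanded voltage,
    # derated by the inferred efficiency (k is a hypothetical gain)
    rate = k * voltage * efficiency
    return angle + rate * dt

# Drive the 'inferred' wheel with the same commands as the actual satellite
angle = 0.0
for v in [1.0, 1.0, -0.5]:
    angle = update_rw_angle(angle, v, efficiency=0.8, dt=0.1)
```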

• ML seminar
• Showed the sim, which runs on the laptop. Then everyone’s status reports
• Meeting with Aaron
• Really good discussion. I think I have a handle on the paper/chapter. Added it to the ethical considerations section

# Phil 11.1.19

7:00 – 3:00 ASRC GOES

• Hugging Face: State-of-the-Art Natural Language Processing in ten lines of TensorFlow 2.0
• Hugging Face is an NLP-focused startup with a large open-source community, in particular around the Transformers library. 🤗/Transformers is a python-based library that exposes an API to use many well-known transformer architectures, such as BERT, RoBERTa, GPT-2 or DistilBERT, that obtain state-of-the-art results on a variety of NLP tasks like text classification, information extraction, question answering, and text generation. Those architectures come pre-trained with several sets of weights.
• Dissertation
• Starting on Human Study section!
• For once there was something there that I could work with pretty directly. Fleshing out the opening
• OODA paper:
• Maximin (Cass Sunstein)
• For regulation, some people argue in favor of the maximin rule, by which public officials seek to eliminate the worst worst-cases. The maximin rule has not played a formal role in regulatory policy in the United States, but in the context of climate change or new and emerging technologies, regulators who are unable to conduct standard cost-benefit analysis might be drawn to it. In general, the maximin rule is a terrible idea for regulatory policy, because it is likely to reduce rather than to increase well-being. But under four imaginable conditions, that rule is attractive.
1. The worst-cases are very bad, and not improbable, so that it may make sense to eliminate them under conventional cost-benefit analysis.
2. The worst-case outcomes are highly improbable, but they are so bad that even in terms of expected value, it may make sense to eliminate them under conventional cost-benefit analysis.
3. The probability distributions may include “fat tails,” in which very bad outcomes are more probable than merely bad outcomes; it may make sense to eliminate those outcomes for that reason.
4. In circumstances of Knightian uncertainty, where observers (including regulators) cannot assign probabilities to imaginable outcomes, the maximin rule may make sense. (It may be possible to combine (3) and (4).) With respect to (3) and (4), the challenges arise when eliminating dangers also threatens to impose very high costs or to eliminate very large gains. There are also reasons to be cautious about imposing regulation when technology offers the promise of “moonshots,” or “miracles,” offering a low probability or an uncertain probability of extraordinarily high payoffs. Miracles may present a mirror-image of worst-case scenarios.
• Reaction wheel efficiency inference
• Since I have this spiffy accurate model, I think I’m going to try using it before spending a lot of time evolving an ensemble
• Realized that I only trained it with a voltage of +1, so I’ll need to abs(delta)
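• The trick is just to mirror inputs into the trained regime. A sketch with a toy stand-in for the model (the real model interface may differ):

```python
def infer_efficiency(model, delta):
    # The model was only trained with a voltage of +1, i.e. positive
    # deltas, so mirror negative inputs into the trained regime.
    # model is any callable trained on abs(delta) - hypothetical interface
    return model(abs(delta))

# Toy stand-in model: efficiency estimate from a positive delta
toy_model = lambda d: min(1.0, d / 10.0)
eff = infer_efficiency(toy_model, -5.0)
```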
• It’s working!

• Next steps:
• I can’t do ensemble realtime inference because I’d need a gpu for each model. This means that I need to get the best “best” model and use that
• Run the evolver to see if something better can be found
• Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value
• Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
• Committed everything. I think I’m done for the day

# Phil 10.30.19

7:00 – 5:00 GOES

• Dissertation – finish up the maps chapter – done!
• Try writing up more expensive information thoughts (added to discussion section as well)
• Game theory comes from an age of incomplete information. Now we have access to mostly complete, but potentially expensive information
• Expense in time – throwing the breakers on high-frequency trading
• Expense in money – Buying the information you need from available resources
• Expensive in resources – developing the hardware and software to obtain the information (Operation Hummingbird to TPU/DNN development)
• By handing the information management to machines, we create a human-machine social structure, governed by the rules of dense/sparse, stiff/slack networks
• AI combat is a very good example of an extremely stiff network (varies in density) and the associated time expense. Combat has to happen as fast as possible, due to OODA loop constraints. But if the system does not have designed-in capacity to negotiate a ceasefire (on both/all sides!), there may be no way to introduce it in human time scales, even though the information that one side is losing is readily apparent.
• Online advertising is a case where existing information is hidden from the target of the advertiser, but available to the platform, and to a lesser degree, the client. Because of this information asymmetry, the user’s behavior/beliefs are more likely to be exploited in a way that denies the user agency, while granting maximum agency to the platform and clients.
• Deepfakes, spam and the costs of identifying deliberate misinformation
• Call to action: the creation of an information environment impact body that can examine these issues and determine costs. This is too complex a process for the creators to do on their own, and there would be rampant conflict of interest anyway. But an EPA-like structure, where experts in this topic perform as a counterbalance to unconstrained development and exploitation of the information ecosystem
• The Knowledge, Analytics, Cognitive and Cloud Computing (KnACC) lab in the Information Systems department in UMBC aims to address challenging issues at the intersection of Data Science and Cloud Computing. We are located in ITE 415.
• GOES
• Start creating NN that takes pitch/roll/yaw star tracker deltas and tries to calculate reaction wheel efficiency
• input vector is dp, dr, dy. Assume a fixed timestep
• output vector is effp, effr, effy
• once everything trains up, try running the inferencer on the running sim and display “inferred RW efficiency” for each RW
• Broke out the base class parts of TF2OptimizerTest. I just need to generate the test/train data for now, no sim needed
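• Here’s a sketch of that kind of data generation: draw random per-wheel efficiencies, and compute observed deltas as commanded deltas scaled by efficiency (units, ranges, and the generation scheme are made up):

```python
import random

def make_sample():
    # Commanded pitch/roll/yaw deltas for one fixed timestep (made-up units)
    commanded = [random.uniform(-1.0, 1.0) for _ in range(3)]
    # Per-wheel efficiencies are the training targets
    eff = [random.uniform(0.5, 1.0) for _ in range(3)]
    # Observed deltas: commanded motion degraded by wheel efficiency
    observed = [c * e for c, e in zip(commanded, eff)]
    return observed, eff  # (dp, dr, dy) -> (effp, effr, effy)

train = [make_sample() for _ in range(1000)]
```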

big ending news for the day

# Phil 10.25.19

7:00 – 4:00 ASRC GOES

# Phil 10.24.19

Janelle Shane’s website

7:00 – ASRC GOES

• Dissertation
• Nice chapter on force-directed graphs here
• Explaining Strava heatmap.
• Also, added a better transition from Moscovici to Simon’s Ant and mapping. This is turning into a lot of writing…
• Explain approach for cells (sum of all agent time, and sum all unique agent visits)
• Explain agent trajectory (add to vector if cur != prev)
• Good discussion with Aaron about time series approaches to trajectory detection

# Phil 10.16.19

7:00 – ASRC GOES

• Listening to Rachel Maddow on City Arts and Lectures. She’s talking about the power of oil and gas, and how they are essentially anti-democratic. I think that may be true for most extractive industries. They are incentivized to take advantage of the populations that are the gatekeepers to the resource, which is why you get corruption – it’s cost effective. This also makes me wonder about advertising, which regards consumers as the source to extract money/votes/etc. from.
• Dissertation:
• Something to add to the discussion section. Primordial jumps are not made just by individuals on a fitness landscape. Sometimes the landscape itself can change, as with a natural disaster. The survivors are presented with an entirely new fitness landscape, often devoid of competition, that they can now climb.
• This implies that sustained stylistic change creates sophisticated ecosystems, while primordial change disrupts that, and sets the stage for the creation of new ecosystems.
• Had a really scary moment. Everything with \includegraphics wouldn’t compile. It seems to be a problem with MikTex, as described here. The fix is to place this code after \documentclass:
\makeatletter
\def\set@curr@file#1{%
  \begingroup
    \escapechar\m@ne
    \xdef\@curr@file{\expandafter\string\csname #1\endcsname}%
  \endgroup
}
\def\quote@name#1{"\quote@@name#1\@gobble""}
\def\quote@@name#1"{#1\quote@@name}
\def\unquote@name#1{\quote@@name#1\@gobble"}
\makeatother
• Finished the intro simulation description and results. Next is robot stampedes, then adversarial herding
• Evolver
• Check on status
• Write abstract for GSAW if things worked out
• GOES-R AI/ML Meeting
• Lots of AIMS deployment discussion. Config files, version control, etc.
• AIMS / A2P Meeting
• Walked through report
• Showed video of the OpenAI robot Rubik’s cube to drive home the need for simulation
• Send an estimate for travel expenses for 2020

# Phil 10.15.19

7:00 – ASRC GOES

• Well, I’m pretty sure I missed the filing deadline for a defense in February. Looks like April 20 now?
• Dissertation – More simulation. Nope, worked on making sure that I actually have all the paperwork done that will let me defend in February.
• Evolver? Test? Done! It seems to be working. Here’s what I’ve got
• Ground Truth: Because the MLP is trained on a set of mathematical functions, I have a definitive ground truth that I can extend infinitely. It’s simply a set of ten sin(x) waves of varying frequency:
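• A sketch of generating that ground truth – ten sine waves over a shared sample grid, with frequency growing by wave index (the actual frequency scheme may differ):

```python
import math

def make_ground_truth(num_waves=10, num_samples=100):
    # Each row is sin(x) at a different frequency over the same sample grid
    waves = []
    for i in range(1, num_waves + 1):
        freq = i  # frequency grows with the wave index (assumed scheme)
        waves.append([math.sin(freq * 2.0 * math.pi * s / num_samples)
                      for s in range(num_samples)])
    return waves

gt = make_ground_truth()
```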

• All Predictions: If you read back through my posts, I’ve discovered how variable a neural network can be when it has the same architecture and training parameters. This variation is based solely on the different random initialization  of the weights between layers.
• I’ve put together a genetic-algorithm-based evolver to determine the best hyperparameters, but because of the variation due to initialization, I have to train an ensemble of models and do a statistical analysis just to see if one set of hyperparameters is truly better than another. The reason is easy to see in the following image. What you are looking at is the input vector being run through ten models that are used to calculate the statistical values of the ensemble. You can see that most values are pretty good, some are a bit off, and some are pretty bonkers.

• Ensemble Average: On the whole though, if you take the average of all the ensemble, you get a pretty nice result. And, unlike the single-shot method of training, the likelihood that another ensemble produced with the same architecture will be the same is much higher.
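• The ensemble average itself is just the element-wise mean across the models’ predictions:

```python
def ensemble_average(predictions):
    # predictions: list of equal-length prediction vectors, one per model
    n = len(predictions)
    return [sum(p[i] for p in predictions) / n
            for i in range(len(predictions[0]))]

# Three toy 'model' outputs for the same input vector
avg = ensemble_average([[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]])
```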

• This is not to say that the model is perfect. The orange curve at the top of the last chart is too low. This model had a mean accuracy of 67%. I’ve just kicked off a much longer run to see if I can find a better architecture using the evolver over 50 generations rather than just 2.
• Ok, it’s now tomorrow, and I have the full run of 50 generations. Things did get better. We end with a higher mean, but we also have a higher variance. This means that it’s possible that the architecture around generation 23 might actually be better:

• Because all the values are saved in the spreadsheet, I can try that scenario, but let’s see what the best mean looks like as an ensemble when compared to the early run:

• Wow, that is a lot better. All the models are much closer to each other, and appear to be clustered around the right places. I am genuinely surprised how tidy the clustering is, based on the previous “All Predictions” plot towards the top of this post. On to the ensemble average:

• That is extremely close to the “Ground Truth” chart. The orange line is in the right place, for example. The only error that I can see with a cursory visual inspection is that the height of the olive line is a little lower than it should be.
• Now, I am concerned that there may be two peaks in this fitness landscape that we’re trying to climb. The one that we are looking for is a generalized model that can fit approximate curves. The other case is that the network has simply memorized the curves and will blow up when it sees something different. Let’s test that.
• First, let’s revisit the training set. This model was trained with extremely clean data. The input is a sin function with varying frequencies, and the evaluation data is the same sin function, picking up where we cut off the training data. Here’s an example of the clean data that was used to train the model:

• Now let’s try noising that up, so that the model has to figure out what to do based on data that model has never seen before:
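• Noising here is just adding a Gaussian perturbation to each input sample (sigma is a made-up level):

```python
import random

def add_noise(series, sigma=0.1, seed=None):
    # Perturb each sample with Gaussian noise so the model has to
    # cope with inputs it never saw in training
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, sigma) for s in series]

noisy = add_noise([0.0, 0.5, 1.0], sigma=0.1, seed=42)
```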

• Let’s see what happened! First, let’s look at all the predictions from the ensemble:

• The first thing that I notice is that it didn’t blow up. Although the paths from each model are somewhat different, each one got all the paths approximately right, and there is no wild deviation. The worst behavior (as usual?) is the orange band, and possibly the green band. But this looks like it should average well. Let’s take a look:

• That seems pretty good. And the orange / green lines are in the right place. It’s the blue, olive, and grey lines that are a little low. Still, pretty happy with this.
• So, ensembles seem to work very well, and make for resilient, predictable behavior in NN architectures. The cost is that there is much more time required to run many, many models through the system.
• Work on AI paper
• Good chat with Aaron – the span of approaches to the “model brittleness problem” can be described using three scenarios:
• Military: Models used in training and at the start of a conflict may not be worth much during hostilities
• Waste, Fraud, and Abuse. Clever criminals can figure out how not to get caught. If they know the models being used, they may be able to thwart them better
• Facial recognition and protest. Currently, protesters in cultures that support large-scale ML-based surveillance try to disguise their identity to the facial recognizers. Developing patterns that are likely to cause errors in recognizers and classifiers may support civil disobedience.
• Solving Rubik’s Cube with a Robot Hand (openAI)
• To overcome this, we developed a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation. This frees us from having an accurate model of the real world, and enables the transfer of neural networks learned in simulation to be applied to the real world.

# Phil 10.11.19

7:00 – 5:00 ASRC

• Got Will’s extractor working last night
• A thought about how Trump’s popularity appears to be crumbling. Primordial jumps don’t have the same sort of sunk cost that stylistic change has. If one big jump doesn’t work out, try something else drastic. It’s the same or less effort than hillclimbing out of a sinking region
• Dissertation. Fix the proposal sections as per yesterday’s notes
• Evolver
• Write out model files as eval_0 … eval_n. If the new fitness is better than the old fitness, replace best_0 … best_n
• Which is turning out to be tricky. Had to add a save function to save at the right time in the eval loop
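• The file-handling part of that scheme can be sketched like this – no model code, just the eval_n to best_n promotion (names and payloads are placeholders):

```python
import os
import shutil

def save_eval_models(dirname, payloads):
    # Write this generation's models out as eval_0 ... eval_n
    for i, data in enumerate(payloads):
        with open(os.path.join(dirname, "eval_{}".format(i)), "w") as f:
            f.write(data)

def promote_if_better(dirname, new_fitness, best_fitness, count):
    # If the new fitness beats the previous best, replace best_0 ... best_n
    if new_fitness > best_fitness:
        for i in range(count):
            shutil.copyfile(os.path.join(dirname, "eval_{}".format(i)),
                            os.path.join(dirname, "best_{}".format(i)))
        return new_fitness
    return best_fitness
```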

# Phil 10.10.19

7:00 – 4:00 ASRC GOES

• The Daily has an episode on how to detach from environmental reality and create a social reality stampede
• Dissertation, working on finishing up the “unexpected findings” piece of the research plan
• Tie together explore/exploit, the Three Patterns, and M&R three behaviors.
• Also, set up the notion that it was initially explore OR exploit, with no thought given to the middle ground. M&R foreshadowed that there would be, though
• Registered for Navy AI conference Oct 22
• Get together with Vadim to see how the physics are going on Tuesday?
• More evolver
• installed the new timeseriesML2
• The test run blew up with a tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 error. Can’t find any direct help, though maybe try this?
• Reduce the batch size of datagen.flow (the default is 32, so try 8/16/24)
• Figured it out – I’m saving models in memory. Need to write them out instead.
• Swing by campus and check on Will

# Phil 10.8.19

7:00 – 5:00 ASRC GOES

• Had a really good discussion in seminar about weight randomness and hyperparameter tuning
• Got Will to show me the issue he’s having with the data. The first element of an item is being REPLACED INTO twice, and we’re not seeing the last one
• Chat with Aaron about the AI/ML weapons paper.
• He gave me The ethics of algorithms: Mapping the debate to read
• In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
• An issue that we’re working through is when an inert object like a hammer becomes something that has a level of (for lack of a better term) agency imbued by the creator, which creates a mismatch in the user’s head as to what should happen. The more intelligent the system, the greater the opportunity for mismatch. My thinking was that Dourish, in  Where the Action Is had some insight (pg 109):
• This aspect of Heidegger’s phenomenology is already known in HCI. It was one of the elements on which Winograd and Flores (1986) based their analysis of computational theories of cognition. In particular, they were concerned with Heidegger’s distinction between “ready-to-hand” (zuhanden) and “present-at-hand” (vorhanden). These are ways, Heidegger explains, that we encounter the world and act through it. As an example, consider the mouse connected to my computer. Much of the time, I act through the mouse; the mouse is an extension of my hand as I select objects, operate menus, and so forth. The mouse is, in Heidegger’s terms, ready-to-hand. Sometimes, however, such as when I reach the edge of the mousepad and cannot move the mouse further, my orientation toward the mouse changes. Now, I become conscious of the mouse mediating my action, precisely because of the fact that it has been interrupted. The mouse becomes the object of my attention as I pick it up and move it back to the center of the mousepad. When I act on the mouse in this way, being mindful of it as an object of my activity, the mouse is present-at-hand.
• Dissertation – working on Research Design. Turns out that I had done the pix but only had placeholder text.
• Left the evolver cooking last night. Hopefully results today, then break up the class and build the lazy version. Arrgh! Misspelled variable. Trying a short run to verify.
• That seems to work nicely:

• The mean improves from 57% to 68%, so that’s really nice. But notice also that the range from min to max on line 5 is between 100% and 20%. Wow.
• Here’s 50 generations. I need to record steps and best models. That’s next:

• Waikato meeting tonight. Chris is pretty much done. Suggested using word clouds to show group discussion markers

# Phil 10.7.19

AmeriSpeak® is a research panel where members get rewards when they share their opinions. Members represent their communities when they answer surveys online, through the AmeriSpeak App, or by phone. Membership on the AmeriSpeak Panel is by invitation only. Policymakers and business leaders use AmeriSpeak survey results to make decisions that impact our lives. Tell us your thoughts on issues important to you, your everyday life, and topics in the news such as health care, finance, education, technology, and society.

Commission launches call to create the European Digital Media Observatory The European Commission has published a call for tenders to create the first core service of a digital platform to help fighting disinformation in Europe. The European Digital Media Observatory will serve as a hub for fact-checkers, academics and researchers to collaborate with each other and actively link with media organisations and media literacy experts, and provide support to policy makers. The call for tenders opened on 1 October and will run until 16 December 2019.

ASRC GOES 7:00 – 7:00

• Expense Report!
• Call Erikson!
•  Dissertation
• Change safe to low risk
• Tweaking the Research Design chapter
• Evolver
• See if the run broke or completed this weekend – IT restarted the machine. Restarted and let it cook. I seem to have fixed the GPU bug, since it’s been running all day. It’s 10,000 models!
• Look into splitting up and running on AWS
• Rather than explicitly gathering ten runs each time for each genome, I could hash the runs by the genome parameters. More successful genomes will be run more often.
• Implies a BaseEvolver, LazyEvolver, and RigorousEvolver class
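• A sketch of the hashing idea – key a run cache by a deterministic hash of the genome’s parameters, so repeated evaluations of the same genome accumulate under one key (structure and names are hypothetical):

```python
import json
from collections import defaultdict

def genome_key(genome):
    # Deterministic key for a genome expressed as a dict of parameters;
    # sort_keys makes insertion order irrelevant
    return json.dumps(genome, sort_keys=True)

run_cache = defaultdict(list)

def record_run(genome, fitness):
    # More successful genomes get selected more often, so they
    # naturally accumulate more fitness samples under their key
    run_cache[genome_key(genome)].append(fitness)

def mean_fitness(genome):
    runs = run_cache[genome_key(genome)]
    return sum(runs) / len(runs)

record_run({"layers": 3, "lr": 0.01}, 0.6)
record_run({"lr": 0.01, "layers": 3}, 0.8)  # same genome, different dict order
```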
• Neural Network Based Optimal Control: Resilience to Missed Thrust Events for Long Duration Transfers
• (pdf) A growing number of spacecraft are adopting new and more efficient forms of in-space propulsion. One shared characteristic of these high efficiency propulsion techniques is their limited thrust capabilities. This requires the spacecraft to thrust continuously for long periods of time, making them susceptible to potential missed thrust events. This work demonstrates how neural networks can autonomously correct for missed thrust events during a long duration low-thrust transfer trajectory. The research applies and tests the developed method to autonomously correct a Mars return trajectory. Additionally, methods for improving the response of neural networks to missed thrust events are presented and further investigated.
• Ping Will for Thursday rather than Wednesday done – it seems to be a case where the first entry is being duplicated
• Arpita’s presentation:
• Information Extraction from unstructured text
• logfile analysis
• Why is the F1 score so low on open coding with human tagging?
• Annotation generation slide is not clear