Category Archives: Tensorflow

Phil 1.17.20

An ant colony has memories that its individual members don’t have

  • Like a brain, an ant colony operates without central control. Each is a set of interacting individuals, either neurons or ants, using simple chemical interactions that in the aggregate generate their behaviour. People use their brains to remember. Can ant colonies do that? 

7:00 – ASRC

  •  Dissertation
    • More edits
    • Changed all the overviews so that they also reference the section by name. It reads better now, I think
    • Meeting with Thom
  • GPT-2 Agents
  • GSAW Slide deck

Phil 1.15.20

I got invited to the TF Dev conference!

The HKS Misinformation Review is a new format of peer-reviewed, scholarly publication. Content is produced and “fast-reviewed” by misinformation scientists and scholars, released under open access, and geared towards emphasizing real-world implications. All content is targeted towards a specialized audience of researchers, journalists, fact-checkers, educators, policy makers, and other practitioners working in the information, media, and platform landscape.

  • For the essays, a length of 1,500 to 3,000 words (excluding footnotes and methodology appendix) is appropriate, but the HKS Misinformation Review will consider and publish longer articles. Authors of articles with more than 3,000 words should consult the journal’s editors before submission.

7:00 – ASRC GOES

  •  Dissertation
    • It looks like I fixed my LaTeX problems. I went to C:\Users\phil\AppData\Roaming\MiKTeX\2.9\tex\latex, and deleted the ifvtex folder. Re-ran, things installed, and all is better now
    • Slides
  • GOES
    • Pinged Isaac about the idea of creating scenarios that incorporate the NASA simulators
    • Meeting
  • GSAW
    • Slides
    • Speakers presenting in a plenary session are scheduled to speak for 15 minutes, with five additional minutes allowed for questions and answers from the audience
    • Our microphones work best when the antenna unit is clipped to a belt and the microphone is attached near the center of your chest.
    • We are NOT providing network capabilities such as WiFi. If you require WiFi, you are responsible for purchasing it from the hotel and ensuring that it works for the presentation.
    • Charts produced by the PC version of Microsoft PowerPoint 2013, 2016 or 365 are preferred. In creating your slides, note that the presentation room is large and you should consider this in your selection of larger fonts, diagram size, etc. At a minimum, a 20-point font is recommended
  • GPT-2 – Maybe do something with Aaron today?

Phil 1.13.20

7:00 – 6:30 ASRC GOES

  • Dissertation
  • GOES
    • New board is not showing up. Yay, it shows up if I remove the old board and put it in the old position
    • Ordered a 1,000 watt power supply

 

Phil 1.2.20

7:00 – 4:30 ASRC PhD

  • More highlighting and slides. Once I get through the Background section, I’ll write the overview, then repeat that pattern.
    • I’m tweaking too much text to keep the markup version. Sigh.
    • Finished Background and sent that to Wayne
  • GPT-2 Agents. See if we can get multiple texts generated – nope
    • Build a corpus of .txt files
    • Try running them through LMN
  • No NOAA meeting
  • No ORCA meeting
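The "build a corpus of .txt files" step above could be sketched roughly like this (directory layout and file naming are my own, hypothetical choices, not from the actual project):

```python
import tempfile
from pathlib import Path

# Minimal sketch of collecting generated texts into a corpus directory
# of .txt files. Naming scheme (doc_NNNN.txt) is hypothetical.
def write_corpus(texts, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, text in enumerate(texts):
        (out / f"doc_{i:04d}.txt").write_text(text, encoding="utf-8")
    return sorted(out.glob("*.txt"))

corpus_dir = tempfile.mkdtemp()
files = write_corpus(["first generated text", "second generated text"], corpus_dir)
```

The sorted glob at the end gives a stable file ordering for whatever tool (e.g. LMN) consumes the corpus next.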

Phil 12.30.19

7:00 – 7:00 ASRC PhD

ClimateTree

  • Nice visualization, with map-like aspects: The Climate Learning Tree
  •  Dissertation
    • Start JuryRoom section – done!
    • Finished all content!
  • GPT-2 Agents
    • Download big model and try to run it
    • Move models and code out of the transformers project
  • GOES
    • Learning by Cheating (sounds like a mechanism for simulation to work with)
      • Vision-based urban driving is hard. The autonomous system needs to learn to perceive the world and act in it. We show that this challenging learning problem can be simplified by decomposing it into two stages. We first train an agent that has access to privileged information. This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants. In the second stage, the privileged agent acts as a teacher that trains a purely vision-based sensorimotor agent. The resulting sensorimotor agent does not have access to any privileged information and does not cheat. This two-stage training procedure is counter-intuitive at first, but has a number of important advantages that we analyze and empirically demonstrate. We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark. Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art. For the video that summarizes this work, see this https URL
  • Meeting with Aaron
    • Overview at the beginning of each chapter – look at Aaron’s chapter 5 for example intro and summary.
    • Callouts in text should match the label
    • hfill to right-justify
    • Footnote goes after punctuation
    • Punctuation goes inside quotes
    • for url monospace use \texttt{} (perma.cc)
    • indent blockquotes 1/2 more tab
    • Non breaking spaces on names
    • Increase figure sizes in intro

Phil 12.27.19

ASRC PhD 7:00 –

  • The difference between “more” (low dimension stampede-ish), and “enough” (grounded and comparative) – from Rebuilding the Social Contract, Part 2
  • Dissertation – finished Limitations!
  • GPT-2
    • Having installed all the transformers-related libraries, I’m testing the evolver to see if it still works. Woohoo! Onward
    • Is this good? It seems to have choked on the Torch examples, which makes sense
      D:\Development\Sandboxes\transformers>make test-examples
      python -m pytest -n auto --dist=loadfile -s -v ./examples/
      ================================================= test session starts =================================================
      platform win32 -- Python 3.7.4, pytest-5.3.2, py-1.8.0, pluggy-0.13.1 -- D:\Program Files\Python37\python.exe
      cachedir: .pytest_cache
      rootdir: D:\Development\Sandboxes\transformers
      plugins: forked-1.1.3, xdist-1.31.0
      [gw0] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw1] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw2] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw3] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw4] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw5] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw6] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw7] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw0] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw1] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw2] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw3] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw4] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw5] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw6] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw7] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0] / gw4 [0] / gw5 [0] / gw6 [0] / gw7 [0]
      scheduling tests via LoadFileScheduling
      
      ======================================================= ERRORS ========================================================
      _____________________________________ ERROR collecting examples/test_examples.py ______________________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\test_examples.py'.
      Hint: make sure your test modules/packages have valid Python names.
      Traceback:
      examples\test_examples.py:23: in <module>
          import run_generation
      examples\run_generation.py:25: in <module>
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      _________________________ ERROR collecting examples/summarization/test_utils_summarization.py _________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\summarization\test_utils_summarization.py'.
      Hint: make sure your test modules/packages have valid Python names.
      Traceback:
      examples\summarization\test_utils_summarization.py:18: in <module>
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      ================================================== 2 errors in 1.57s ==================================================
      make: *** [test-examples] Error 1
    • Hmm. run_generation.py seems to need Torch. This sets off a whole bunch of issues. First, installing Torch from here provides a cool little tool to determine what to install: Torch
    • Note that the available versions of CUDA are 9.2 and 10.0. This is a problem, because at the moment, TF only works with 10.0. Mostly because the user community hates upgrading drivers. TFCuda
    • That being said, it may be true that the release candidate TF is using CUDA 10.1: TFCuda10.1
    • I think I’m going to wait until Aaron shows up to decide if I want to jump down this rabbit hole. In the meantime, I’m going to look at other TF implementations of the GPT-2. Also, the actual use of Torch seems pretty minor, so maybe it’s avoidable?
      • It appears to be just this method
        def set_seed(args):
            np.random.seed(args.seed)
            torch.manual_seed(args.seed)
            if args.n_gpu > 0:
                torch.cuda.manual_seed_all(args.seed)
      • And the code that calls it
            args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
            args.n_gpu = torch.cuda.device_count()
        
            set_seed(args)
    • Aaron suggested using a previous version of torch that is compatible with CUDA 10.0. All the previous versions are here, and this is the line that should work (the huggingface transformers repo “is tested on Python 3.5+, PyTorch 1.0.0+ and TensorFlow 2.0.0-rc1”):
      pip install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
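Since the actual use of Torch in run_generation.py is so small, the dependency could probably be made optional with a guarded import, roughly like this sketch (the guard and the stdlib-`random` fallback are my own, not part of the huggingface code):

```python
import random

# Sketch: make the torch usage in set_seed optional, so the surrounding
# script could run TF-only. HAS_TORCH and the fallback are my additions.
try:
    import torch
    HAS_TORCH = True
except ImportError:
    HAS_TORCH = False

def set_seed(seed: int, n_gpu: int = 0) -> None:
    random.seed(seed)
    if HAS_TORCH:
        torch.manual_seed(seed)
        if n_gpu > 0:
            torch.cuda.manual_seed_all(seed)

set_seed(42)
```

With torch absent, only the stdlib RNG is seeded; with torch present, behavior matches the original method.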

Phil 12.26.19

ASRC PhD 7:00 – 4:00

  • Dissertation
    • Limitations
  • GPT-2 agents setup – set up the project, but in the process of getting the huggingface transformers, I wound up setting up that project as well
    • Following directions for
      • pip install transformers
      • git clone https://github.com/huggingface/transformers
        • cd transformers
        • pip install .
      • pip install -e .[testing]
        • make test – oops. My GNU Make wasn’t on the path – fixed it
        • running tests
          • Some passed, some failed. Errors like: tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model Fatal Python error: Aborted
          • Sure is keeping the processor busy… Like bringing the machine to its knees busy….
          • Finished – 14 failed, 10 passed, 196 skipped, 20 warnings in 1925.12s (0:32:05)
  • Fixed the coffee maker
  • Dealt with stupid credit card nonsense

Phil 12.11.19

7:00 – 5:30 ASRC GOES

  • Call dentist – done!
  • Dissertation – finished designing for populations. Ethics are next

     

  • Evolver
    • Looking at Keras-Tuner (github) to compare Evolver against
    • Installing. Wow. Big. 355MB?
    • Installed the new optevolver whl. No more timeseriesml2 for tuning! Fixed many broken links in code that used timeseriesml2
    • Tried getting the keras-tuner package installed, but it seems to make the GPU invisible. It broke everything: after setting up some quick code to prove that “cpu:0” worked just fine but “gpu:0” didn’t, I cleaned out all the TF packages (tensorflow-gpu, tensorboard, and keras-tuner) and reinstalled tensorflow-gpu. Everything is humming happily again, but I need a less destructive Bayesian system.
    • Maybe this? An Introductory Example of Bayesian Optimization in Python with Hyperopt A hands-on example for learning the foundations of a powerful optimization framework
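The "quick code to prove all that" mentioned above might have looked something like this reconstruction (mine, not the original code): pin a tiny matmul to each device and see which ones succeed.

```python
# Reconstruction (hypothetical) of a per-device smoke test: try a small
# matmul pinned to "/cpu:0" and "/gpu:0". Guarded so it degrades
# gracefully if TensorFlow is not installed at all.
try:
    import tensorflow as tf
except ImportError:
    tf = None

def device_works(device_name: str) -> bool:
    if tf is None:
        return False
    try:
        with tf.device(device_name):
            a = tf.constant([[1.0, 2.0]])
            b = tf.constant([[3.0], [4.0]])
            tf.matmul(a, b)
        return True
    except Exception:
        return False

for name in ("/cpu:0", "/gpu:0"):
    print(name, "ok" if device_works(name) else "failed")
```

A check like this separates "the GPU is invisible" from "the whole install is broken" before resorting to reinstalling packages.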
  • Meetings at Mission
    • Erik was stuck at a luncheon for the first meeting
    • Some new commits from Vadim, but he couldn’t make the meeting
    • Discussion about the Artificial Intelligence and Machine Learning, Technology Summit in April, and the AI Tech Connect Spring. Both are very aligned with industry (like AI + 3D Printing), which is not my thing, so I passed. I did suggest that IEEE ICTAI 2020 might be a good fit. Need to send info to John.
    • Still need to get started on the schedule for version 2 development. Include conferences and prep, and minimal assistance.

Phil 11.15.19

7:00 – 4:00 ASRC GOES

  • Morning Meeting with Wayne
    • Quotes need page numbers
    • Found out more about why Victor’s defense was postponed. Became nervous as a result
  • Dissertation – starting the discussion section
    • I’m thinking about objective functions and how individual and group objectives work together, particularly in extreme conditions.
    • In extreme situations, the number of options available to an agent or group is diminished. There may be only one move apparently available in a chess game. A race car at the limits of adhesion has only one path through a turn. A boxer has a tiny window to land a blow. As the floodwaters rise, the range of options diminish. In a tsunami, there is only one option – run.
    • Here’s a section from article 2 of the US Military Code of Conduct (from here):
      • Surrender is the willful act of members of the Armed Forces turning themselves over to enemy forces when not required by utmost necessity or extremity. Surrender is always dishonorable and never allowed. When there is no chance for meaningful resistance, evasion is impossible, and further fighting would lead to their death with no significant loss to the enemy, members of Armed Forces should view themselves as “captured” against their will versus a circumstance that is seen as voluntarily “surrendering.”
    • If a machine is trained for combat, will it have learned the concept of surrender? According to the USCoC, no, surrender is never allowed. A machine trained to “win”, like Google’s AlphaGo, does not learn to resign. That part has to be explicitly coded in (from Wired):
      • According to David Silver, another DeepMind researcher who led the creation of AlphaGo, the machine will resign not when it has zero chance of winning, but when its chance of winning dips below 20 percent. “We feel that this is more respectful to the way humans play the game,” Silver told me earlier in the week. “It would be disrespectful to continue playing in a position which is clearly so close to loss that it’s almost over.”
    • Human organizations, like armies and companies, are a kind of superhuman intelligence, made up of human parts with their own objective functions. In the case of a company, that objective is often to maximise shareholder value (NYTimes by Milton Friedman):
      • But the doctrine of “social responsibility” taken seriously would extend the scope of the political mechanism to every human activity. It does not differ in philosophy from the most explicitly collectivist doctrine. It differs only by professing to believe that collectivist ends can be attained without collectivist means. That is why, in my book “Capitalism and Freedom,” I have called it a “fundamentally subversive doctrine” in a free society, and have said that in such a society, “there is one and only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”
    • When any kind of population focuses singly on a particular goal, it creates shared social reality. The group aligns with the goal and pursues it. In the absence of the awareness of the environmental effects of this orientation, it is possible to stampede off a cliff, or shape the environment so that others deal with the consequences of this goal.
    • It is doubtful that many people deliberately choose to be obese. However, markets and the profit motive have resulted in a series of innovations, ranging from agriculture to aisles of high-fructose corn syrup-based drinks at the local supermarket. The logistics chain that can create and sell a 12oz can of brand-name soda for about 35 cents is a modern miracle, optimized to maximize income for every link in the chain. But in this case, the costs of competition have created an infinite supply of heavily marketed empty calories. Even though we are aware at some level that we should rarely – if ever – have one of these beverages, they are consumed by the billions.
    • The supply chain for soda is a form of superintelligence, driven by a simple objective function. It is resilient and adaptive, capable of dealing with droughts, wars, and changing fashion. It is also contributing to the deaths of approximately 300,000 Americans annually.
    • How is this like combat? Reflexive vs. reflective. Low-diversity thinking is a short-term benefit for many organizations; it enables first-mover advantage, which can serve to crowd out more diverse (more expensive) thinking. More here…

Phil 11.14.19

7:00 – 3:30 ASRC GOES

  • Dissertation – Done with Human Study!
  • Evolver
      • Work on parameter passing and function storing
      • You can use the * operator before an iterable to expand it within the function call. For example:
        timeseries_list = [timeseries1, timeseries2, ...]
        r = scikits.timeseries.lib.reportlib.Report(*timeseries_list)
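A runnable toy version of the same unpacking idea, with made-up names:

```python
# Demonstrates the * operator expanding an iterable into positional
# arguments, as described above. add3 is just a placeholder function.
def add3(a, b, c):
    return a + b + c

args_list = [1, 2, 3]
total = add3(*args_list)
print(total)  # → 6
```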
      • Here’s the running code with variable arguments
        def plus_func(v1:float, v2:float) -> float:
            return v1 + v2
        
        def minus_func(v1:float, v2:float) -> float:
            return v1 - v2
        
        def mult_func(v1:float, v2:float) -> float:
            return v1 * v2
        
        def div_func(v1:float, v2:float) -> float:
            return v1 / v2
        
        if __name__ == '__main__':
            func_array = [plus_func, minus_func, mult_func, div_func]
        
            vf = EvolveAxis("func", ValueAxisType.FUNCTION, range_array=func_array)
            v1 = EvolveAxis("X", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)
            v2 = EvolveAxis("Y", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)
        
            for f in func_array:
                result = vf.get_random_val()
                print("------------\nresult = {}\n{}".format(result, vf.to_string()))
      • And here’s the output
        ------------
        result = -1.0
        func: cur_value = div_func
        	X: cur_value = -1.75
        	Y: cur_value = 1.75
        ------------
        result = -2.75
        func: cur_value = plus_func
        	X: cur_value = -0.25
        	Y: cur_value = -2.5
        ------------
        result = 3.375
        func: cur_value = mult_func
        	X: cur_value = -0.75
        	Y: cur_value = -4.5
        ------------
        result = -5.0
        func: cur_value = div_func
        	X: cur_value = -3.75
        	Y: cur_value = 0.75
      • Now I need to get this to work with different functions with different arg lists. I think I can do this with an EvolveAxis containing a list of EvolveAxis with functions. Done, I think. Here’s what the calling code looks like:
        # create a set of functions that all take two arguments
        func_array = [plus_func, minus_func, mult_func, div_func]
        vf = EvolveAxis("func", ValueAxisType.FUNCTION, range_array=func_array)
        v1 = EvolveAxis("X", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)
        v2 = EvolveAxis("Y", ValueAxisType.FLOAT, parent=vf, min=-5, max=5, step=0.25)
        
        # create a single function that takes no arguments
        vp = EvolveAxis("random", ValueAxisType.FUNCTION, range_array=[random.random])
        
        # create a set of Axis from the previous function evolve args
        axis_list = [vf, vp]
        vv = EvolveAxis("meta", ValueAxisType.VALUEAXIS, range_array=axis_list)
        
        # run four times
        for i in range(4):
            result = vv.get_random_val()
            print("------------\nresult = {}\n{}".format(result, vv.to_string()))
      • Here’s the output. The random function has all the decimal places:
        ------------
        result = 0.03223958125899473
        meta: cur_value = 0.8840652389671935
        ------------
        result = -0.75
        meta: cur_value = -0.75
        ------------
        result = -3.5
        meta: cur_value = -3.5
        ------------
        result = 0.7762888191296017
        meta: cur_value = 0.13200324934487906
      • Verified that everything still works with the EvolutionaryOptimizer. Now I need to make sure that the new mutations include these new dimensions
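The nested-axis idea above (an EvolveAxis whose choices are other EvolveAxis instances) can be sketched in miniature. The class and method names here are my stand-ins, not the real optevolver API:

```python
import random

# Miniature stand-in for a nested evolve-axis: an axis samples from
# plain values, calls a function, or recurses into a child axis.
# Names are hypothetical, not the optevolver API.
class Axis:
    def __init__(self, name, choices):
        self.name = name
        self.choices = choices
        self.cur_value = None

    def get_random_val(self):
        pick = random.choice(self.choices)
        if isinstance(pick, Axis):
            self.cur_value = pick.get_random_val()
        elif callable(pick):
            self.cur_value = pick()
        else:
            self.cur_value = pick
        return self.cur_value

# a function axis and a value axis wrapped by a meta axis
func_axis = Axis("func", [random.random])
value_axis = Axis("X", [x * 0.25 for x in range(-20, 21)])  # -5.0 .. 5.0
meta = Axis("meta", [func_axis, value_axis])

for _ in range(4):
    print(meta.get_random_val())
```

The recursion in `get_random_val` is what lets a meta axis delegate to whichever child it happens to pick, mirroring the mixed output above (sometimes a full-precision `random.random()` value, sometimes a 0.25-step grid value).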

     

  • I think I should also move TF2OptimizationTestBase to TimeSeriesML2?
  • Starting Human Compatible

Phil 11.5.19

“Everything that we see is a shadow cast by that which we do not see.” – Dr. King

misinfo

Transformer

ASRC GOES 7:00 – 4:30

  • Dissertation – more human study. Pretty smooth progress right now!
  • Cleaning up the sim code for tomorrow – done. All the prediction and manipulation to change the position data for the RWs and the vehicle are done in the inference section, while the updates to the drawing nodes are separated.
  • I think this is the code to generate GPT-2 Agents?: github.com/huggingface/transformers/blob/master/examples/run_generation.py

Phil 11.4.19

7:00 – 9:00 ASRC GOES

  • Cool thing: Our World in Data
    • The goal of our work is to make the knowledge on the big problems accessible and understandable. As we say on our homepage, Our World in Data is about Research and data to make progress against the world’s largest problems.
  • Dissertation – more human study
  • This is super-cool: The Future News Pilot Fund: Call for ideas
    • Between February and June 2020 we will fund and support a community of changemakers to test their promising ideas, technologies and models for public interest news, so communities in England have access to reliable and accurate news about the issues that matter most to them.
  • October status report
  • Sim + ML next steps:
    • I can’t do ensemble realtime inference because I’d need a gpu for each model. This means that I need to get the best “best” model and use that
    • Run the evolver to see if something better can be found
    • Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value – done
    • Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
      • Graphics are done
      • Need to make second control system and ‘sim’ that uses inferred efficiency. Didn’t have to do all that. What I’m really doing is calculating rw angles based on the voltage and inferred efficiency. I can take the commands from the control system for the ‘actual’ satellite.
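What "calculating rw angles based on the voltage and inferred efficiency" might look like in miniature (gain and timestep constants are hypothetical, not values from the real sim):

```python
# Miniature version of the step described above: integrate a reaction
# wheel angle from commanded voltage scaled by an efficiency. The gain
# and dt constants are placeholders, not from the actual simulator.
def step_rw_angle(angle, voltage, efficiency, gain=1.0, dt=0.1):
    return angle + voltage * efficiency * gain * dt

# drive 'actual' and 'inferred' wheels from the same command stream
actual = inferred = 0.0
for volts in (1.0, 1.0, -0.5):
    actual = step_rw_angle(actual, volts, 0.80)     # true efficiency
    inferred = step_rw_angle(inferred, volts, 0.79) # model's estimate
```

Feeding both wheels the same commands from the one control system is what makes the side-by-side display meaningful: any divergence between the two angles is purely the efficiency-inference error.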

SimAndInferred

  • ML seminar
    • Showed the sim, which runs on the laptop. Then everyone’s status reports
  • Meeting with Aaron
    • Really good discussion. I think I have a handle on the paper/chapter. Added it to the ethical considerations section

Phil 11.1.19

7:00 – 3:00 ASRC GOES

KerasTuner

  • Hugging Face: State-of-the-Art Natural Language Processing in ten lines of TensorFlow 2.0
    • Hugging Face is an NLP-focused startup with a large open-source community, in particular around the Transformers library. 🤗/Transformers is a python-based library that exposes an API to use many well-known transformer architectures, such as BERT, RoBERTa, GPT-2 or DistilBERT, that obtain state-of-the-art results on a variety of NLP tasks like text classification, information extraction, question answering, and text generation. Those architectures come pre-trained with several sets of weights.
  • Dissertation
    • Starting on Human Study section!
    • For once there was something there that I could work with pretty directly. Fleshing out the opening
  • OODA paper:
    • Maximin (Cass Sunstein)
      • For regulation, some people argue in favor of the maximin rule, by which public officials seek to eliminate the worst worst-cases. The maximin rule has not played a formal role in regulatory policy in the United States, but in the context of climate change or new and emerging technologies, regulators who are unable to conduct standard cost-benefit analysis might be drawn to it. In general, the maximin rule is a terrible idea for regulatory policy, because it is likely to reduce rather than to increase well-being. But under four imaginable conditions, that rule is attractive.
        1. The worst-cases are very bad, and not improbable, so that it may make sense to eliminate them under conventional cost-benefit analysis.
        2. The worst-case outcomes are highly improbable, but they are so bad that even in terms of expected value, it may make sense to eliminate them under conventional cost-benefit analysis.
        3. The probability distributions may include “fat tails,” in which very bad outcomes are more probable than merely bad outcomes; it may make sense to eliminate those outcomes for that reason.
        4. In circumstances of Knightian uncertainty, where observers (including regulators) cannot assign probabilities to imaginable outcomes, the maximin rule may make sense. (It may be possible to combine (3) and (4).) With respect to (3) and (4), the challenges arise when eliminating dangers also threatens to impose very high costs or to eliminate very large gains. There are also reasons to be cautious about imposing regulation when technology offers the promise of “moonshots,” or “miracles,” offering a low probability or an uncertain probability of extraordinarily high payoffs. Miracles may present a mirror-image of worst-case scenarios.
  • Reaction wheel efficiency inference
    • Since I have this spiffy accurate model, I think I’m going to try using it before spending a lot of time evolving an ensemble
    • Realized that I only trained it with a voltage of +1, so I’ll need to abs(delta)
    • It’s working!
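The abs(delta) workaround noted above could be as simple as this sketch (the model here is a placeholder callable, not the trained network):

```python
# Sketch of the workaround: the network only saw a +1 voltage during
# training, so fold negative deltas over with abs() before inference.
# Efficiency itself is sign-independent. 'model' is a toy placeholder.
def infer_efficiency(model, delta: float) -> float:
    return model(abs(delta))

toy_model = lambda d: 0.8 * d  # stand-in for the trained inferencer
eff = infer_efficiency(toy_model, -0.5)
```

Folding the input is cheaper than retraining on both voltage signs, at the cost of assuming the wheel responds symmetrically.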

WorkingInference

  • Next steps:
    • I can’t do ensemble realtime inference because I’d need a gpu for each model. This means that I need to get the best “best” model and use that
    • Run the evolver to see if something better can be found
    • Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value
    • Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
  • Committed everything. I think I’m done for the day

Phil 10.30.19

7:00 – 5:00 GOES

starbird

  • Dissertation – finish up the maps chapter – done!
  • Try writing up more expensive information thoughts (added to discussion section as well)
    • Game theory comes from an age of incomplete information. Now we have access to mostly complete, but potentially expensive information
      • Expense in time – throwing the breakers on high-frequency trading
      • Expense in $$ – Buying the information you need from available resources
      • Expensive in resources – developing the hardware and software to obtain the information (Operation Hummingbird to TPU/DNN development)
    • By handing the information management to machines, we create a human-machine social structure, governed by the rules of dense/sparse, stiff/slack networks
      • AI combat is a very good example of an extremely stiff network (varies in density) and the associated time expense. Combat has to happen as fast as possible, due to OODA loop constraints. But if the system does not have designed-in capacity to negotiate a ceasefire (on both/all sides!), there may be no way to introduce it in human time scales, even though the information that one side is losing is readily apparent.
      • Online advertising is a case where existing information is hidden from the target of the advertiser, but available to the platform, and to a lesser degree, the client. Because this information asymmetry, the user’s behavior/beliefs are more likely to be exploited in a way that denies the user agency, while granting maximum agency to the platform and clients.
      • Deepfakes, spam and the costs of identifying deliberate misinformation
      • Call to action: the creation of an information environment impact body that can examine these issues and determine costs. This is too complex a process for the creators to do on their own, and there would be rampant conflict of interest anyway. But an EPA-like structure, where experts in this topic perform as a counterbalance to unconstrained development and exploitation of the information ecosystem
  • The Knowledge, Analytics, Cognitive and Cloud Computing (KnACC) lab in the Information Systems department in UMBC aims to address challenging issues at the intersection of Data Science and Cloud Computing. We are located in ITE 415.
  • GOES
    • Start creating NN that takes pitch/roll/yaw star tracker deltas and tries to calculate reaction wheel efficiency
      • input vector is dp, dr, dy. Assume a fixed timestep
      • output vector is effp, effr, effy
      • once everything trains up, try running the inferencer on the running sim and display “inferred RW efficiency” for each RW
      • Broke out the base class parts of TF2OptimizerTest. I just need to generate the test/train data for now, no sim needed
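Generating that test/train data could be sketched roughly as below; the efficiency range and noise level are my guesses, not values from the real simulator:

```python
import random

# Hedged sketch of the training data described above: inputs are the
# observed pitch/roll/yaw deltas (dp, dr, dy) for a fixed timestep and
# unit command; outputs are per-wheel efficiencies (effp, effr, effy).
# The 0.5-1.0 efficiency range and noise sigma are guesses.
def make_sample():
    eff = [random.uniform(0.5, 1.0) for _ in range(3)]          # effp, effr, effy
    deltas = [e * 1.0 + random.gauss(0.0, 0.005) for e in eff]  # dp, dr, dy
    return deltas, eff

train = [make_sample() for _ in range(1000)]
```

Because the mapping from delta back to efficiency is nearly linear here, even a small network should train up quickly, which makes this a good smoke test before wiring the inferencer to the running sim.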

Twitter

big ending news for the day

Phil 10.25.19

7:00 – 4:00 ASRC GOES