
Phil 4.24.20

It is very wet today

(radar image)

Spent far too much time trying to upload a picture to the graduation site. It appears to be broken

D20

  • Changed the CONTROLLED days to < 2, since things are generally looking better

ACSOS

  • Sent the revised draft to Antonio

GPT-2 Agents

  • Found what appears to be just what I’m looking for. Searching on GitHub for GPT-2 tensorflow led me to this project, GPT-2 Client. I’ll give that a try and see how it works. The developer, Rishabh Anand, seems to have solid skills, so I have some hope that this could work. I do not have the energy to start this on a Friday and then switch to GANs for the rest of the day. Sunday looks like another wet one, so maybe then.

GOES

More looking at layers. This is ImageNet’s block3_conv3: (layer visualization image: block_3_conv2)

  • Advanced CNNs
  • Start GANS? Yes!
    • Got this version working. Now I need to step through it. But here are some plots of it learning:
    • I had dreams about this, so I’m going to record the thinking here:
      • An MLP should be able to get from a simple simulation (square wave) to a more accurate(?) simulation (sine wave). The data set is various start points and frequency queries into the DB, with the matching (“real”/noisy) data as the test. My intuition is that the noise will be lost, so that’s the part we’re going to have to get back with the GAN.
      • So I think there is a two-step process
        • Train the initial NN that will produce the generalized solution
        • Use the output of the NN and the “real” data to train the GAN for fine tuning
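      • As a concrete sketch of step one (everything here – shapes, layer sizes, training settings – is illustrative, not the real pipeline), a minimal Keras MLP that maps a square-wave “simple” simulation to the matching sine wave:
        import numpy as np
        import tensorflow as tf
        
        # toy data: each sample is a windowed square wave (the coarse sim);
        # the target is the matching sine wave; phase and frequency vary per sample
        n_samples, n_points = 1000, 64
        t = np.linspace(0, 2 * np.pi, n_points)
        phase = np.random.uniform(0, 2 * np.pi, (n_samples, 1))
        freq = np.random.uniform(1.0, 3.0, (n_samples, 1))
        x_square = np.sign(np.sin(freq * t + phase))   # simple simulation
        y_sine = np.sin(freq * t + phase)              # "real" target
        
        # step 1: MLP that learns the generalized square-to-sine mapping
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation='relu', input_shape=(n_points,)),
            tf.keras.layers.Dense(128, activation='relu'),
            tf.keras.layers.Dense(n_points)
        ])
        model.compile(optimizer='adam', loss='mse')
        model.fit(x_square, y_sine, epochs=10, batch_size=32, verbose=0)
        
        # step 2 (not shown): use model.predict() plus the noisy "real" data to
        # train a GAN that restores the detail the MLP smooths away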

Phil 4.23.20

Transformer Architecture: The Positional Encoding

  • In this article, I don’t plan to explain its architecture in depth as there are currently several great tutorials on this topic (here, here, and here), but alternatively, I want to discuss one specific part of the transformer’s architecture – the positional encoding.
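  • For reference, here’s a minimal NumPy sketch of the sinusoidal positional encoding the article is talking about (the sizes are arbitrary):
    import numpy as np
    
    def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
        # PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
        # PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
        pos = np.arange(max_len)[:, np.newaxis]   # (max_len, 1)
        i = np.arange(d_model)[np.newaxis, :]     # (1, d_model)
        angle_rates = 1.0 / np.power(10000.0, (2 * (i // 2)) / d_model)
        angles = pos * angle_rates
        pe = np.zeros((max_len, d_model))
        pe[:, 0::2] = np.sin(angles[:, 0::2])
        pe[:, 1::2] = np.cos(angles[:, 1::2])
        return pe
    
    pe = positional_encoding(max_len=50, d_model=128)
    print(pe.shape)  # (50, 128)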

D20

  • Add centroids for states – done
  • Return the number of neighbors as an argument – done
  • Chatted with Aaron and Zach. More desire to continue than abandon

ACSOS

  • More revisions. Swap steps for discussion and future work

GOES

    • IRS proposal went in yesterday
    • Continue with GANs
    • Using the VGG model now with much better results. Also figured out how to load weights and read the probabilities in the output layer: (vgg output image)
    • Same thing using the pre-trained model from Keras:
      from tensorflow.keras.applications.vgg16 import VGG16
      # prebuild model with pre-trained weights on imagenet
      model = VGG16(weights='imagenet', include_top=True)
      model.compile(optimizer='sgd', loss='categorical_crossentropy')

      (vggPretrained output image)
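    • Continuing the block above, a minimal sketch of reading those output probabilities (the image path is a placeholder; decode_predictions maps the 1000-way softmax back to ImageNet class names):
      import numpy as np
      from tensorflow.keras.preprocessing import image
      from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions
      
      # load and preprocess a test image (path is a placeholder)
      img = image.load_img('cat.jpg', target_size=(224, 224))
      x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
      
      # read the probabilities from the output layer
      probs = model.predict(x)
      print(decode_predictions(probs, top=3)[0])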

    • Trying to visualize a layer using this code. Using that code as a starting point, I had to explore how to slice up the tensors in the right way. A CNN layer has a set of “filters”, each containing a square grid of pixels. The data is stored as an array of per-filter values at each x, y coordinate, so I had to figure out how to get one image at a time. Here’s my toy:
      import numpy as np
      import matplotlib.pyplot as plt
      
      n_rows = 4
      n_cols = 8
      depth = 4
      
      my_list = []
      
      # build an n_rows x n_cols grid where each cell holds `depth` values,
      # encoded as d + c*10 + r*100 so each slice is easy to recognize
      for r in range(n_rows):
          row = []
          my_list.append(row)
          for c in range(n_cols):
              cell = []
              row.append(cell)
              for d in range(depth):
                  cell.append(d + c*10 + r*100)
      
      print(my_list)
      nl = np.array(my_list)
      for d in range(depth):
          print("\nlayer {} = \n{}".format(d, nl[:, :, d]))
          plt.figure(d)
          plt.imshow(nl[:, :, d], aspect='auto', cmap='plasma')
      
      plt.show()
    • This gets features from a cat image at one of the pooling layers. The color map is completely arbitrary:
      # `model` here is assumed to be truncated at a pooling layer and `x` is the
      # preprocessed cat image (see the setup sketch below); get the features from this block
      features = model.predict(x)
      print(features.shape)
      farray = np.array(features[0])
      print("{}".format(farray[:, :, 0]))
      
      # show the first four filter slices
      for d in range(4):
          plt.figure(d)
          plt.imshow(farray[:, :, d], aspect='auto', cmap='plasma')
      plt.show()
    • But we get some cool pix!
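    • For reference, one way the model and x in the block above might be set up (the layer name and image path are assumptions, not the code I actually ran): truncate the pretrained VGG16 at a pooling layer and feed it a preprocessed image.
      import numpy as np
      from tensorflow.keras.models import Model
      from tensorflow.keras.preprocessing import image
      from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
      
      # feature extractor truncated at one of the pooling layers
      base = VGG16(weights='imagenet', include_top=False)
      model = Model(inputs=base.input, outputs=base.get_layer('block3_pool').output)
      
      # preprocess the input image (path is a placeholder)
      img = image.load_img('cat.jpg', target_size=(224, 224))
      x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))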

Phil 4.22.20

  • Amsterdam, 24 April 2020​
  • This workshop aims to bring together researchers and practitioners from the emerging fields of Graph Representation Learning and Geometric Deep Learning. The workshop will feature invited talks and a poster session. There will be ample opportunity for discussion and networking.​
  • Invited talks will be live-streamed on YouTube: https://www.youtube.com/watch?v=Zf_nLR4kMo4
  • Looking for an online seminar that presents the latest advances in reinforcement learning theory? You just found it! We aim to bring you a virtual seminar (approximately) every Tuesday at 5pm UTC featuring the latest work in theoretical reinforcement learning.

D20

  • Added P-threshold to json file. I’m concerned that everyone is too busy to participate any more. Aaron hasn’t even asked about the project since he got better and is complaining about how overworked he is. Zach seems to be equally busy. If no one steps up by the end of the week, I think it’s time to either take over the project entirely or shut it down.

ACSOS

  • Started working on Antonio’s changes
  • Changed the MappApp so that the trajectory lines are blue

GOES

  • Finish CNN chapter
  • Enable Tensorflow profiling
    • Installed the plugin: pip install tensorboard_plugin_profile
    • Updated setup_tensorboard():
      import os
      import shutil
      from typing import List
      import tensorflow as tf
      
      def setup_tensorboard(dir_str: str, windows_slashes: bool = True) -> List:
          if windows_slashes:
              dir_str = dir_str.replace("/", "\\")
          try:
              # clear out any previous run's logs
              shutil.rmtree(dir_str)
          except OSError:
              print("no file {} at {}".format(dir_str, os.getcwd()))
      
          # use TensorBoard, princess Aurora!
          callbacks = [tf.keras.callbacks.TensorBoard(log_dir=dir_str, profile_batch='500,510')]
          return callbacks
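    • As a usage sketch (the model and training-data names are placeholders), the returned callback list just gets passed to fit(), and TensorBoard is pointed at the same log directory:
      callbacks = setup_tensorboard("logs/profile")
      model.fit(x_train, y_train, epochs=10, callbacks=callbacks)
      # then, in a shell: tensorboard --logdir logs/profile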
  • Huh. Looks like scipy.misc.imresize() and scipy.misc.imread() are both deprecated and out of the library. Trying opencv
    • pip install opencv-python
    • Here’s how I did it, with some debugging thrown in to verify that everything was working correctly:
      import cv2
      import numpy as np
      
      img_names = ['cat.jpg', 'steam-locomotive.jpg']
      img_list = []
      for name in img_names:
          img = cv2.imread(name)
          res = np.array(cv2.resize(img, dsize=(32, 32), interpolation=cv2.INTER_CUBIC))
          cv2.imwrite(name.replace(".jpg", "_32x32.jpg"), res)
          img_list.append(res)
      
      # swap each image's row and column axes, then scale to [0, 1]
      imgs = np.transpose(img_list, (0, 2, 1, 3))
      imgs = np.array(imgs) / 255
  • This forced me to go down a transpose() in multiple dimensions rabbit hole that’s worth documenting. First, here’s code that takes some tiny images in an array and transposes them:
    import numpy as np
    
    img_list = [
        # image 1
        [[[10, 20, 30],
          [11, 21, 31],
          [12, 22, 32],
          [13, 23, 33]],
    
         [[255, 255, 255],
          [48, 45, 58],
          [101, 150, 205],
          [255, 255, 255]],
    
         [[255, 255, 255],
          [43, 56, 75],
          [77, 110, 157],
          [255, 255, 255]],
    
         [[255, 255, 255],
          [236, 236, 238],
          [76, 104, 139],
          [255, 255, 255]]],
        # image 2
        [[[100, 200, 300],
          [101, 201, 301],
          [102, 202, 302],
          [103, 203, 303]],
    
         [[159, 146, 145],
          [89, 74, 76],
          [207, 207, 210],
          [212, 203, 203]],
    
         [[145, 155, 164],
          [52, 40, 36],
          [166, 160, 163],
          [136, 132, 134]],
    
         [[61, 56, 60],
          [36, 32, 35],
          [202, 195, 195],
          [172, 165, 177]]]]
    
    np_imgs = np.array(img_list)
    print("np_imgs shape = {}".format(np_imgs.shape))
    
    imgs = np.transpose(img_list, (0, 2, 1, 3))
    print("imgs shape = {}".format(np_imgs.shape))
    #imgs = np.array(imgs) / 255
    
    print("pix 0: \n{}".format(np_imgs[0]))
    print("transposed pix 0: \n{}".format(imgs[0]))
    print("\n------------------------\n")
    print("pix 1: \n{}".format(np_imgs[1]))
    print("transposed pix 1: \n{}".format(imgs[1]))
  • So, this is a complicated array, with a shape of (2, 4, 4, 3). What we want to do is rotate the images (the inner 4, 4) by 90 degrees by transposing them. The way to understand NumPy’s transpose() is that it interchanges axes. The trick is understanding how.
  • For this matrix, applying a transpose that does nothing means writing this:
    imgs = np.transpose(img_list, (0, 1, 2, 3))
  • Think of it as an identity transpose. What we want to do is reverse the order of the inner 4, 4, which we do like this:
    imgs = np.transpose(img_list, (0, 2, 1, 3))
  • That’s it! Now the second “4” will be transposed with the first “4”. You can do this with any of the elements. So
    imgs = np.transpose(img_list, (3, 2, 1, 0))
  • Reverses everything!
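  • A quick shape check (the values don’t matter here) makes the axis bookkeeping concrete:
    import numpy as np
    
    a = np.zeros((2, 4, 4, 3))
    print(np.transpose(a, (0, 1, 2, 3)).shape)  # (2, 4, 4, 3) -- identity
    print(np.transpose(a, (0, 2, 1, 3)).shape)  # (2, 4, 4, 3) -- inner 4s swapped (rows <-> cols)
    print(np.transpose(a, (3, 2, 1, 0)).shape)  # (3, 4, 4, 2) -- everything reversed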
  • Ok, so things are working, but the results are crap. Not really worrying about it for now because it’s CIFAR and I always have this problem:
    ./images\airplane.jpg = [8] ship
    ./images\automobile.jpg = [0] airplane 
    ./images\bird.jpg = [4] deer
    ./images\cat.jpg = [0] airplane 
    ./images\cat2.jpg = [6] frog
    ./images\cat3.jpg = [8] ship
    ./images\deer.jpg = [8] ship
    ./images\dog.jpg = [2] bird
    ./images\horse.jpg = [8] ship
    ./images\ship.jpg = [0] airplane 
    ./images\steam-locomotive.jpg = [2] bird
    ./images\truck.jpg = [3] cat
    [8 0 4 0 6 8 8 2 8 0 2 3]

     

  • Meeting

Phil 1.17.20

An ant colony has memories that its individual members don’t have

  • Like a brain, an ant colony operates without central control. Each is a set of interacting individuals, either neurons or ants, using simple chemical interactions that in the aggregate generate their behaviour. People use their brains to remember. Can ant colonies do that? 

7:00 – ASRC

  •  Dissertation
    • More edits
    • Changed all the overviews so that they also reference the section by name. It reads better now, I think
    • Meeting with Thom
  • GPT-2 Agents
  • GSAW Slide deck

Phil 1.15.20

I got invited to the TF Dev conference!

The HKS Misinformation Review is a new format of peer-reviewed, scholarly publication. Content is produced and “fast-reviewed” by misinformation scientists and scholars, released under open access, and geared towards emphasizing real-world implications. All content is targeted towards a specialized audience of researchers, journalists, fact-checkers, educators, policy makers, and other practitioners working in the information, media, and platform landscape.

  • For the essays, a length of 1,500 to 3,000 words (excluding footnotes and methodology appendix) is appropriate, but the HKS Misinformation Review will consider and publish longer articles. Authors of articles with more than 3,000 words should consult the journal’s editors before submission.

7:00 – ASRC GOES

  •  Dissertation
    • It looks like I fixed my LaTeX problems. I went to C:\Users\phil\AppData\Roaming\MiKTeX\2.9\tex\latex, and deleted the ifvtex folder. Re-ran, things installed, and all is better now
    • Slides
  • GOES
    • Pinged Isaac about the idea of creating scenarios that incorporate the NASA simulators
    • Meeting
  • GSAW
    • Slides
    • Speakers presenting in a plenary session are scheduled to speak for 15 minutes, with five additional minutes allowed for questions and answers from the audience
    • Our microphones work best when the antenna unit is clipped to a belt and the microphone is attached near the center of your chest.
    • We are NOT providing network capabilities such as WiFi. If you require WiFi, you are responsible for purchasing it from the hotel and ensuring that it works for the presentation.
    • Charts produced by the PC version of Microsoft PowerPoint 2013, 2016 or 365 are preferred
    • In creating your slides, note that the presentation room is large and you should consider this in your selection of larger fonts, diagram size, etc. At a minimum, a 20-point font is recommended
  • GPT-2 – Maybe do something with Aaron today?

Phil 1.2.20

7:00 – 4:30 ASRC PhD

  • More highlighting and slides. Once I get through the Background section, I’ll write the overview, then repeat that pattern.
    • I’m tweaking too much text to keep the markup version. Sigh.
    • Finished Background and sent that to Wayne
  • GPT-2 Agents. See if we can get multiple texts generated – nope
    • Build a corpus of .txt files
    • Try running them through LMN
  • No NOAA meeting
  • No ORCA meeting

Phil 12.31.19

CodeIt

Get tix for ET 2020

7:00 – 4:30 PhD

  • Starting slides as a way to do the chapter overviews and summaries
  • GPT-2 agents
    • Got rid of Huggingface’s transformers library. Too much hidden stuff to understand
    • Aaron found a couple of other projects on GitHub – trying those
    • Downloaded the 715M model and associated files

And I’m a guest editor! (IEEE)

Phil 12.30.19

7:00 – 7:00 ASRC PhD

ClimateTree

  • Nice visualization, with map-like aspects: The Climate Learning Tree
  •  Dissertation
    • Start JuryRoom section – done!
    • Finished all content!
  • GPT-2 Agents
    • Download big model and try to run it
    • Move models and code out of the transformers project
  • GOES
    • Learning by Cheating (sounds like a mechanism for simulation to work with)
      • Vision-based urban driving is hard. The autonomous system needs to learn to perceive the world and act in it. We show that this challenging learning problem can be simplified by decomposing it into two stages. We first train an agent that has access to privileged information. This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants. In the second stage, the privileged agent acts as a teacher that trains a purely vision-based sensorimotor agent. The resulting sensorimotor agent does not have access to any privileged information and does not cheat. This two-stage training procedure is counter-intuitive at first, but has a number of important advantages that we analyze and empirically demonstrate. We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark. Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art. For the video that summarizes this work, see this https URL
  • Meeting with Aaron
    • Overview at the beginning of each chapter – look at Aaron’s chapter 5 for example intro and summary.
    • Callouts in text should match the label
    • \hfill to right-justify
    • Footnote goes after punctuation
    • Punctuation goes inside quotes
    • for url monospace use \texttt{} (perma.cc)
    • indent blockquotes 1/2 more tab
    • Non breaking spaces on names
    • Increase figure sizes in intro
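    • A minimal LaTeX sketch of a few of these conventions (illustrative only, not from the actual dissertation):
      % non-breaking space in a name
      As Dr.~Smith argued \ldots
      % monospace for URLs
      Archived at \texttt{perma.cc/XXXX-XXXX}.
      % punctuation inside quotes, footnote after the punctuation
      They called it ``grounded.''\footnote{See Chapter 5.}
      % push text to the right margin
      Left-hand label \hfill right-justified text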

Phil 12.27.19

ASRC PhD 7:00 –

  • The difference between “more” (low dimension stampede-ish), and “enough” (grounded and comparative) – from Rebuilding the Social Contract, Part 2
  • Dissertation – finished Limitations!
  • GPT-2
    • Having installed all the transformers-related libraries, I’m testing the evolver to see if it still works. Woohoo! Onward
    • Is this good? It seems to have choked on the Torch examples, which makes sense
      D:\Development\Sandboxes\transformers>make test-examples
      python -m pytest -n auto --dist=loadfile -s -v ./examples/
      ================================================= test session starts =================================================
      platform win32 -- Python 3.7.4, pytest-5.3.2, py-1.8.0, pluggy-0.13.1 -- D:\Program Files\Python37\python.exe
      cachedir: .pytest_cache
      rootdir: D:\Development\Sandboxes\transformers
      plugins: forked-1.1.3, xdist-1.31.0
      [gw0] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw1] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw2] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw3] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw4] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw5] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw6] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw7] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw0] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw1] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw2] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw3] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw4] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw5] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw6] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw7] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0] / gw4 [0] / gw5 [0] / gw6 [0] / gw7 [0]
      scheduling tests via LoadFileScheduling
      
      ======================================================= ERRORS ========================================================
      _____________________________________ ERROR collecting examples/test_examples.py ______________________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\test_examples.py'.
      Hint: make sure your test modules/packages have valid Python names.
      Traceback:
      examples\test_examples.py:23: in <module>
          import run_generation
      examples\run_generation.py:25: in <module>
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      _________________________ ERROR collecting examples/summarization/test_utils_summarization.py _________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\summarization\test_utils_summarization.py'.
      Hint: make sure your test modules/packages have valid Python names.
      Traceback:
      examples\summarization\test_utils_summarization.py:18: in <module>
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      ================================================== 2 errors in 1.57s ==================================================
      make: *** [test-examples] Error 1
    • Hmm. run_generation.py seems to need Torch. This sets off a whole bunch of issues. First, installing Torch from here provides a cool little tool to determine what to install.
    • Note that the available versions of CUDA are 9.2 and 10.0. This is a problem because, at the moment, TF only works with 10.0 – mostly because the user community hates upgrading drivers.
    • That being said, it may be true that the release candidate TF is using CUDA 10.1.
    • I think I’m going to wait until Aaron shows up to decide if I want to jump down this rabbit hole. In the meantime, I’m going to look at other TF implementations of GPT-2. Also, the actual use of Torch seems pretty minor, so maybe it’s avoidable?
      • It appears to be just this method
        def set_seed(args):
            np.random.seed(args.seed)
            torch.manual_seed(args.seed)
            if args.n_gpu > 0:
                torch.cuda.manual_seed_all(args.seed)
      • And the code that calls it
            args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
            args.n_gpu = torch.cuda.device_count()
        
            set_seed(args)
    • Aaron suggests using a previous version of Torch that is compatible with CUDA 10.0. All the previous versions are here, and this is the line that should work (the huggingface transformers repo is “tested on Python 3.5+, PyTorch 1.0.0+ and TensorFlow 2.0.0-rc1”):
      pip install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
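    • A quick sanity check (my guess at what to look for) that the CUDA 10.0 build of Torch actually sees the GPU:
      import torch
      
      print(torch.__version__)          # expect 1.2.0
      print(torch.version.cuda)         # expect 10.0
      print(torch.cuda.is_available())  # expect True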

Phil 12.26.19

ASRC PhD 7:00 – 4:00

  • Dissertation
    • Limitations
  • GPT-2 agents setup – set up the project, but in the process of getting the huggingface transformers, I wound up setting up that project as well
    • Following the install directions:
      • pip install transformers
      • git clone https://github.com/huggingface/transformers
        • cd transformers
        • pip install .
      • pip install -e .[testing]
        • make test – oops. My GNU Make wasn’t on the path – fixed it
        • running tests
          • Some passed, some failed. Errors like: tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model Fatal Python error: Aborted
          • Sure is keeping the processor busy… Like bringing the machine to its knees busy….
          • Finished – 14 failed, 10 passed, 196 skipped, 20 warnings in 1925.12s (0:32:05)
  • Fixed the coffee maker
  • Dealt with stupid credit card nonsense

Phil 12.19.19

7:00 – 4:30 ASRC GOES

  • Dissertation
    • Conclusions – got through the intro and starting the hypothesis section
  • NASA GitHub
  • Evolver
    • More documentation for sure, maybe more debugging?
    • Had to update my home system
    • Looks like the fix is working. I ran it again, and no problems
    • A little more documentation before heading down to the NSOF
  • Simulations
    • Meeting with Isaac – Lots of discussion. The question is how to handle the simulations. NOAA is used to these and has extremely high fidelity ones, but we need sims that can train on many permutations. Here’s an IEEE article on augmented reality training robocars that should be cited
      • industry must augment road testing with other strategies to bring out as many edge cases as possible. One method now in use is to test self-driving vehicles in closed test facilities where known edge cases can be staged again and again.
      • Computer simulation provides a way around the limitations of physical testing. Algorithms generate virtual vehicles and then move them around on a digital map that corresponds to a real-world road. If the data thus generated is then broadcast to an actual vehicle driving itself on the same road, the vehicle will interpret the data exactly as if it had come from its own sensors. Think of it as augmented reality tuned for use by a robot.
  • NSOF Meeting
    • UI demonstrations
    • Got my card activated!

Phil 12.11.19

7:00 – 5:30 ASRC GOES

  • Call dentist – done!
  • Dissertation – finished designing for populations. Ethics are next

     

  • Evolver
    • Looking at Keras-Tuner (github) to compare Evolver against
    • Installing. Wow. Big. 355MB?
    • Installed the new optevolver whl. No more timeseriesml2 for tuning! Fixed many broken links in code that used timeseriesml2
    • Tried getting the keras-tuner package installed, but it seems to make the GPU invisible? Anyway, it broke everything, and after figuring out that “cpu:0” worked just fine but “gpu:0” didn’t (which required setting up some quick code to prove all that – a sketch is below), I cleaned out all the TF packages (tensorflow-gpu, tensorboard, and keras-tuner) and reinstalled tensorflow-gpu. Everything is humming happily again, but I need a less destructive Bayesian system.
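    • Roughly the kind of quick check I mean (a sketch, not the exact code): list the visible devices and force a small op onto each one, so a missing GPU fails loudly instead of silently falling back to the CPU.
      import tensorflow as tf
      
      # what devices does this TF build actually see?
      print(tf.config.list_physical_devices())
      
      # disable soft placement so an unavailable device raises instead of falling back
      tf.config.set_soft_device_placement(False)
      for dev in ['/cpu:0', '/gpu:0']:
          try:
              with tf.device(dev):
                  x = tf.random.normal((2000, 2000))
                  tf.matmul(x, x)
              print("{}: ok".format(dev))
          except Exception as e:
              print("{}: failed ({})".format(dev, type(e).__name__))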
    • Maybe this? An Introductory Example of Bayesian Optimization in Python with Hyperopt – a hands-on example for learning the foundations of a powerful optimization framework.
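    • The shape of a minimal Hyperopt run, just to see what the API looks like (a toy objective, not the evolver problem):
      from hyperopt import fmin, tpe, hp
      
      # toy objective: find x that minimizes (x - 3)^2
      def objective(x):
          return (x - 3.0) ** 2
      
      best = fmin(fn=objective,
                  space=hp.uniform('x', -10, 10),
                  algo=tpe.suggest,
                  max_evals=100)
      print(best)  # something like {'x': 2.99...}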
  • Meetings at Mission
    • Erik was stuck at a luncheon for the first meeting
    • Some new commits from Vadim, but he couldn’t make the meeting
    • Discussion about the Artificial Intelligence and Machine Learning Technology Summit in April, and the AI Tech Connect Spring. Both are very aligned with industry (like AI + 3D Printing), which is not my thing, so I passed. I did suggest that IEEE ICTAI 2020 might be a good fit. Need to send info to John.
    • Still need to get started on the schedule for version 2 development. Include conferences and prep, and minimal assistance.

Phil 11.18.19

7:00 – 4:00 ASRC GOES

  • Dissertation
    • Finished my notes on the introduction to History of Cartography
    • Started in on the discussion, which is a poorly organized mess
  • Evolver
    • Moving the optimization to a hyperparameter folder in TimeSeriesML2. Validating – it works!
    • Make sure that genomes don’t repeat. Making progress, but it’s complex and slow going. Right now it doesn’t repeat on the value, but I don’t think that’s quite right
    • Getting the parameters to print in the spreadsheet history. That’s mostly working, but the function cur_value isn’t working quite right. This may be affecting the evolution of the system, which hits a plateau.
  • Meeting with Aaron M. Went over the discussion debris and worked towards getting things to behave. Need to define what a phase is and remove occurrences of social influence distance. Also discussed getting an editor. My bibfile is a mess

Phil 11.15.19

7:00 – 4:00 ASRC GOES

  • Morning Meeting with Wayne
    • Quotes need page numbers
    • Found out more about why Victor’s defense was postponed. Became nervous as a result
  • Dissertation – starting the discussion section
    • I’m thinking about objective functions and how individual and group objectives work together, particularly in extreme conditions.
    • In extreme situations, the number of options available to an agent or group is diminished. There may be only one move apparently available in a chess game. A race car at the limits of adhesion has only one path through a turn. A boxer has a tiny window to land a blow. As the floodwaters rise, the range of options diminishes. In a tsunami, there is only one option – run.
    • Here’s a section from article 2 of the US Military Code of Conduct (from here):
      • Surrender is the willful act of members of the Armed Forces turning themselves over to enemy forces when not required by utmost necessity or extremity. Surrender is always dishonorable and never allowed. When there is no chance for meaningful resistance, evasion is impossible, and further fighting would lead to their death with no significant loss to the enemy, members of Armed Forces should view themselves as “captured” against their will versus a circumstance that is seen as voluntarily “surrendering.”
    • If a machine is trained for combat, will it have learned the concept of surrender? According to the USCoC, no; surrender is never allowed. A machine trained to “win”, like Google’s AlphaGo, does not learn to resign. That part has to be explicitly coded in (from Wired):
      • According to David Silver, another DeepMind researcher who led the creation of AlphaGo, the machine will resign not when it has zero chance of winning, but when its chance of winning dips below 20 percent. “We feel that this is more respectful to the way humans play the game,” Silver told me earlier in the week. “It would be disrespectful to continue playing in a position which is clearly so close to loss that it’s almost over.”
    • Human organizations, like armies and companies, are a kind of superhuman intelligence, made up of human parts with their own objective functions. In the case of a company, that objective is often to maximize shareholder value (NYTimes, by Milton Friedman):
      • But the doctrine of “social responsibility” taken seriously would extend the scope of the political mechanism to every human activity. It does not differ in philosophy from the most explicitly collectivist doctrine. It differs only by professing to believe that collectivist ends can be attained without collectivist means. That is why, in my book “Capitalism and Freedom,” I have called it a “fundamentally subversive doctrine” in a free society, and have said that in such a society, “there is one and only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”
    • When any kind of population focuses singly on a particular goal, it creates shared social reality. The group aligns with the goal and pursues it. In the absence of the awareness of the environmental effects of this orientation, it is possible to stampede off a cliff, or shape the environment so that others deal with the consequences of this goal.
    • It is doubtful that many people deliberately choose to be obese. However, markets and the profit motive have resulted in a series of innovations, ranging from agriculture to aisles of high-fructose corn syrup-based drinks at the local supermarket. The logistics chain that can create and sell a 12oz can of brand-name soda for about 35 cents is a modern miracle, optimized to maximize income for every link in the chain. But in this case, the costs of competition have created an infinite supply of heavily marketed empty calories. Even though we are aware at some level that we should rarely – if ever – have one of these beverages, they are consumed by the billions
    • The supply chain for soda is a form of superintelligence, driven by a simple objective function. It is resilient and adaptive, capable of dealing with droughts, wars, and changing fashion. It is also contributing to the deaths of approximately 300,000 Americans annually.
    • How is this like combat? Reflexive vs. reflective. Low-diversity thinking is a short-term benefit for many organizations; it enables first-mover advantage, which can serve to crowd out more diverse (more expensive) thinking. More here…