Category Archives: Simulation

Phil 7.19.19

7:00 – 4:30 ASRC GEOS

StanfordNLP

  • Still looking at what’s wrong with my NK model. While searching for “random binary networks kauffman example”, I found Random Boolean Networks, which also has a helpful-looking bibliography
    • Introduction to Random Boolean Networks
      • The goal of this tutorial is to promote interest in the study of random Boolean networks (RBNs). These can be very interesting models, since one does not have to assume any functionality or particular connectivity of the networks to study their generic properties. Like this, RBNs have been used for exploring the configurations where life could emerge. The fact that RBNs are a generalization of cellular automata makes their research a very important topic. The tutorial, intended for a broad audience, presents the state of the art in RBNs, spanning over several lines of research carried out by different groups. We focus on research done within artificial life, as we cannot exhaust the abundant research done over the decades related to RBNs.
      • I can add a display that shows this: Trajectory
      • Got that working
      • Rewrote so that there is an evolve without a fitness test. Trying to set up transition patterns like this: Transitions
      • The thing is, I don’t see how the K part works here…
      • I think I got it working!
    • Complex and Adaptive Dynamical Systems: A Primer
      • A thorough introduction is given, at an introductory level, to the field of quantitative complex system science, with special emphasis on emergence in dynamical systems based on network topologies. Subjects treated include graph theory and small-world networks, a generic introduction to the concepts of dynamical system theory, random Boolean networks, cellular automata and self-organized criticality, the statistical modeling of Darwinian evolution, synchronization phenomena and an introduction to the theory of cognitive systems. 
        It includes chapters on Graph Theory and Small-World Networks; Chaos, Bifurcations and Diffusion; Complexity and Information Theory; Random Boolean Networks; Cellular Automata and Self-Organized Criticality; Darwinian Evolution, Hypercycles and Game Theory; Synchronization Phenomena; and Elements of Cognitive System Theory.
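The dynamics the tutorial describes can be sketched in a few lines of Python. This is a minimal sketch of the standard Kauffman setup (N nodes, K random inputs each, random truth tables, synchronous update); the function names and structure are mine, not the tutorial’s:

```python
import random

def make_rbn(n, k, seed=0):
    """Random Boolean network: each node reads k random inputs and looks its
    next state up in its own random truth table (the classic Kauffman setup)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node fires its Boolean function at once."""
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]  # pack the K input bits into a table index
        new.append(tables[i][idx])
    return new

def trajectory(n=8, k=2, steps=300, seed=0):
    """Run from a random start; the visited states are the trajectory display."""
    inputs, tables = make_rbn(n, k, seed)
    rng = random.Random(seed + 1)
    state = [rng.randint(0, 1) for _ in range(n)]
    traj = [tuple(state)]
    for _ in range(steps):
        state = step(state, inputs, tables)
        traj.append(tuple(state))
    return traj
```

Because the state space is finite (2^n states) and the update is deterministic, every trajectory must eventually revisit a state and fall into an attractor, which is what the transition diagrams show.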

Phil 7.18.19

7:00 – 5:00 ASRC GEOS

  • Started to fold Wayne’s comments in
  • Working on the Kauffman section
  • Tried making it so K can be higher than N with resampling and I still can’t keep the system from converging, which makes me think that there is something wrong with the code.
  • Send reviews to Antonio – done
  • Back to work on the physics model. Make sure to include a data dictionary mapping system to support Bruce’s concept
  • Sent links to Panda3D to Vadim
  • Code autocompletion using deep learning
  • A lot of flailing today but no good progress:

N_20_K_6
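For what it’s worth, the K > N resampling variant mentioned above can be sketched like this. This is my guess at the mechanism, not the actual code: when there aren’t enough distinct loci to supply K neighbors, sample with replacement instead.

```python
import random

def pick_neighbors(n, k, seed=0):
    """Choose the k epistatic inputs for each of the n loci. When k > n - 1
    there aren't enough distinct other loci, so fall back to sampling with
    replacement -- the 'resampling' variant."""
    rng = random.Random(seed)
    def others(i):
        return [j for j in range(n) if j != i]
    if k <= n - 1:
        return [tuple(rng.sample(others(i), k)) for i in range(n)]
    return [tuple(rng.choice(others(i)) for _ in range(k)) for i in range(n)]
```

One thing worth checking when debugging convergence: with replacement, a locus can draw the same neighbor twice, which effectively lowers the realized K and smooths the landscape.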

Phil 7.17.19

7:00 – 7:00 ASRC GEOS

  • Got some nice NK model network plots working:
  • Added a long jump mutation when plateaus are hit:
  • Generally, fixed a lot of bugs in the code, but I think I understand the NK model thing. I do want to try and find how they did the traveling salesman problem
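The long-jump escape can be sketched like this. This is a guess at the mechanism, not the actual code: `patience` and `jump_bits` are assumed parameters, and the fitness function is supplied by the caller.

```python
import random

def mutate(genome, rng, long_jump=False, jump_bits=4):
    """One-bit mutation normally; a multi-bit 'long jump' to escape plateaus."""
    trial = list(genome)
    if long_jump:
        flips = rng.sample(range(len(genome)), jump_bits)
    else:
        flips = [rng.randrange(len(genome))]
    for i in flips:
        trial[i] = 1 - trial[i]
    return trial

def walk_with_jumps(fitness, genome, steps=500, patience=20, seed=0):
    """Hill climb that fires a long jump after `patience` steps with no gain."""
    rng = random.Random(seed)
    best = fitness(genome)
    stalled = 0
    for _ in range(steps):
        trial = mutate(genome, rng, long_jump=(stalled >= patience))
        f = fitness(trial)
        if f > best:
            genome, best, stalled = trial, f, 0
        else:
            stalled += 1
    return genome, best
```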
  • AI/ML Meeting
    • NASA (or the Air Force?) is putting together a reinforcement learning model for autonomous spacecraft control, which requires a simulator.
  • Meeting with Wayne
    • Lots of work on the dissertation
    • Walked through JuryRoom prototype

Phil 7.16.19

7:00 – 6:30 ASRC GEOS

  • Working more on NK Models. I have the original paper – Towards a general theory of adaptive walks on rugged landscapes, and I’ve pulled out my copy of The Origins of Order
    • Determine if I have the evaluation function right
    • Add mutation
    • Draw the networks
    • Draw an N/K/Fitness landscape?
    • As an aside, I think that an NK model can be modified to use backpropagation rather than mutation. That could be interesting.
    • Ok, here’s everything working the way I think it should work, but I’m not sure it’s right….
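For comparison with the checklist above, here’s a minimal NK evaluation and adaptive walk. This is a sketch of the standard Kauffman construction, not my actual code; the function names are mine, and it assumes K ≤ N−1 (distinct neighbors):

```python
import random
from itertools import product

def make_nk(n, k, seed=0):
    """N loci; each locus's fitness contribution depends on itself plus k
    randomly chosen others, via a random table of 2**(k+1) values."""
    rng = random.Random(seed)
    neighbors = [tuple(rng.sample([j for j in range(n) if j != i], k))
                 for i in range(n)]
    tables = [{bits: rng.random() for bits in product((0, 1), repeat=k + 1)}
              for _ in range(n)]
    return neighbors, tables

def nk_fitness(genome, neighbors, tables):
    """Mean over loci of each locus's contribution."""
    total = 0.0
    for i, nbrs in enumerate(neighbors):
        key = tuple(genome[j] for j in (i,) + nbrs)
        total += tables[i][key]
    return total / len(genome)

def adaptive_walk(n=10, k=2, steps=200, seed=0):
    """One-bit mutation hill climb: keep a flip only if fitness improves."""
    rng = random.Random(seed)
    neighbors, tables = make_nk(n, k, seed)
    genome = [rng.randint(0, 1) for _ in range(n)]
    fit = nk_fitness(genome, neighbors, tables)
    history = [fit]
    for _ in range(steps):
        trial = list(genome)
        i = rng.randrange(n)
        trial[i] = 1 - trial[i]
        tf = nk_fitness(trial, neighbors, tables)
        if tf > fit:
            genome, fit = trial, tf
        history.append(fit)
    return history
```

Raising k here makes the landscape more rugged (more, shallower local optima), which is the behavior to sanity-check the real code against.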
  • Need to get back to Antonio about authorship and roles. I think that it makes sense if he can get a sense of what – done
  • Discovered the trumptwitterarchive, which is downloadable. Would like to build a network of the retweets, tagged by sentiment, gender, and race.
  • Code review with Chris. Unfortunately, it was more like an interrogation than a tour. My sense is that he was expecting us to ask questions and we were expecting a presentation.
    • It went ok, but the audio connection was terrible

Phil 7.11.19

7:00 – 4:30 ASRC GEOS

  • Ping Antonio – Done
  • Dissertation
    • More Bones in a Hut. Found the online version of the Hi-Lo chapter from BIC. Explaining why Hi-Lo is different from IPD.
  • More reaction wheel modeling
  • Get flight and hotel for July 30 trip – Done
  • So this is how you should install Panda3d and examples:
    • First, make sure that you have Python 3.7 (64 bit! The default is 32 bit). Make sure that your path environment points to this, and not any other 3.x versions that you’re hoarding on your machine.
    • pip install panda3d==1.10.3
    • Then download the installer and DON’T select the python 3.7 support: Panda3d_install
    • Finish the install and verify that the demos run (e.g. python \Panda3D-1.10.3-x64\samples\asteroids\main.py): asteroids
    • That’s it!
  • Discussed DARPA solicitation HR001119S0053 with Aaron. We know the field of study – logistics and supply chain, but BD has totally dropped the ball on deadlines and setting up any kind of relationship with the points of contact. We countered that we could write a paper and present at a venue to gain access and credibility that way.
    • There is a weekly XML from the FBO. Downloading this week’s to see if it’s easy to parse and search

Phil 7.10.19

7:00 – 5:00 ASRC

  • BP&S is up! Need to ping Antonio
  • Need to fix DfS to de-emphasize the mapping part. Including things like, uh, changing the title…
  • Pix at HQ – Done
  • Greenbelt today, which means getting Panda3D up and running on my laptop – Done. Had to point the IDE at the python in the install.
  • Need to add some thoughts to JuryRoom concepts
  • Send dungeon invites for the 23rd, and ping Aaron M. Done. Wayne can’t make it! Drat!
  • Dissertation working on the Bacharach section
  • Got the sim working on the laptop. I realize that the reaction wheel can be modeled as weights on a stick. Long discussion with Bruce T
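The “weights on a stick” idea reduces to the point-mass moment of inertia, which is a two-liner. A minimal sketch; the masses, radii, and the torque sign convention here are illustrative assumptions, not values from the actual sim:

```python
def stick_inertia(masses, radii):
    """Moment of inertia of point masses on a massless stick: I = sum(m * r**2)."""
    return sum(m * r * r for m, r in zip(masses, radii))

def wheel_torque(inertia, alpha):
    """Reaction torque on the spacecraft body when the wheel is accelerated
    at angular rate alpha: tau = -I * alpha (Newton's third law)."""
    return -inertia * alpha
```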

Phil 6.25.19

7:00 – 7:00 ASRC GEOS

  • Scheduled the map run for Monday, July 1, 12:30 – 4:30
  • Asked Wayne for a 100-word bio by the end of the month. Working on mine today
  • What use are computational models of cognitive processes?
    • Computational modelers are not always explicit about their motivations for constructing models, nor are they always explicit about the theoretical implications of their models once constructed. Perhaps in part due to this, models have been criticized as “black-box” exercises which can play little or no role in scientific explanation. This paper argues that models are useful, and that the motivations for constructing computational models can be made clear by considering the roles that tautologies can play in the development of explanatory theories. From this, additionally, I propose that although there are diverse benefits of model building, only one class of benefits — those which relate to explanation — can provide justification for the activity.
  • DTW run looks good. It took 8 1/2 hours to run: fulldtw
  • Fixed a lot of things to get the clustering to behave, but it all looks good
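The distance underneath that run is classic dynamic time warping, which can be sketched as below. This is a minimal O(n·m) version for 1-D series; the 8.5-hour full run presumably used an optimized or windowed variant:

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D sequences, using the
    standard cumulative-cost recurrence with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

The useful property for clustering is that time-stretched copies of the same shape score near zero, e.g. `dtw([1,2,3], [1,2,2,3])` is 0.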
  • Spent a while arguing online about ketchup vs. mustard with Aaron JuryRoom
  • Waikato meeting
    • Test with a few thousand posts using lorem ipsum
    • Maybe double the character count
    • Scroll to offscreen posts
    • Context-sensitive text
    • Toggle vote button
    • 500 default chars, variable

Phil 5.23.19

7:00 – 5:00 ASRC GEOS

  • Saw 4×3000 with David and Roger last night. The VR lab seems to be a thing. Need to go down and have a chat, possibly about lists, stories, maps and games
  • Found the OECD Principles on Artificial Intelligence. I haven’t had a chance to actually *read* any of it, but I did create a “Treaty Lit” folder in the Sanhedrin folder and put pdf versions of them. I ran my tool over them and got the following probe:
    • state cyber cybercrime agree united china mechanism
  • Putting that into Google Scholar returns some good hits as well, though I haven’t gotten a chance to do anything beyond that.
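Something like that probe can be approximated with a plain term-frequency pass. This is a crude sketch, not the actual tool; the stopword list is illustrative:

```python
import re
from collections import Counter

# Illustrative stopword list -- the real tool presumably uses a fuller one.
STOP = {"the", "of", "and", "to", "in", "a", "on", "for", "is", "that"}

def probe(text, n=7):
    """Top-n non-stopword terms by frequency: a crude stand-in for the
    probe-term extraction run over the Treaty Lit PDFs."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP)
    return [w for w, _ in counts.most_common(n)]
```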
  • JASSS paper
    • Changing “consensus” to “alignment”, and breaking many paragraphs. I think the setup of the space in the introduction is better now.
  • Got caught up on NESDIS. Worked some on the slide deck, which I finally got back. Scheduled a walkthrough with T tomorrow
  • GEOS AI/ML meeting at NSOF. Still trying to figure out roles and responsibilities. I think the Sanhedrin concept will help Bruce formalize our contributions.

Phil 4.26.19

7:00 – 4:00 ASRC TL

Phil 10.31.18

7:00 – ASRC PhD

  • Read this carefully today: Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees
    • Today, we’re excited to share AdaNet, a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on our recent reinforcement learning and evolutionary-based AutoML efforts to be fast and flexible while providing learning guarantees. Importantly, AdaNet provides a general framework for not only learning a neural network architecture, but also for learning to ensemble to obtain even better models.
    • What about data from simulation?
    • Github repo
    • This looks like it’s based deeply on the Cloud AI and Machine Learning products, including cloud-based hyperparameter tuning.
    • Time series prediction is here as well, though treated in a more BigQuery manner
      • In this blog post we show how to build a forecast-generating model using TensorFlow’s DNNRegressor class. The objective of the model is the following: Given FX rates in the last 10 minutes, predict FX rate one minute later.
    • Text generation:
      • Cloud poetry: training and hyperparameter tuning custom text models on Cloud ML Engine
        • Let’s say we want to train a machine learning model to complete poems. Given one line of verse, the model should generate the next line. This is a hard problem—poetry is a sophisticated form of composition and wordplay. It seems harder than translation because there is no one-to-one relationship between the input (first line of a poem) and the output (the second line of the poem). It is somewhat similar to a model that provides answers to questions, except that we’re asking the model to be a lot more creative.
      • Codelab: Google Developers Codelabs provide a guided, tutorial, hands-on coding experience. Most codelabs will step you through the process of building a small application, or adding a new feature to an existing application. They cover a wide range of topics such as Android Wear, Google Compute Engine, Project Tango, and Google APIs on iOS.
        Codelab tools on GitHub

  • Add the Range and Length section in my notes to the DARPA measurement section. Done. I need to start putting together the dissertation using these parts
  • Read Open Source, Open Science, and the Replication Crisis in HCI. Broadly, it seems true, but trying to piggyback on GitHub seems like a shallow solution that repurposes something built for coding – an ephemeral activity – for science, which is archival for a reason. Thought needs to be given to an integrated archive (collection, raw data, cleaned data, analysis, raw results, paper (with reviews?), slides, and possibly a recording of the talk with questions). What would it take to make this work across all science, from critical ethnographies to particle physics? How will it be accessible in 100 years? 500? 1,000? This is very much an HCI problem. It is about designing a useful socio-cultural interface. Some really good questions would be “how do we use our HCI tools to solve this problem?”, and “does this point out the need for new/different tools?”.
  • NASA AIMS meeting. Demo in 2 weeks. AIMS is “time series prediction”, A2P is “unstructured data”. Prove that we can actually do ML, as opposed to just talking about it.
    • How about cross-point correlation? Could show in a sim?
    • Meeting on Friday with a package
    • We’ve solved A, here’s the vision for B – Z and a roadmap. JPSS is a near-term customer (JPSS Data)
    • Getting actionable intelligence from the system logs
    • Application portfolios for machine learning
    • Umbrella of capabilities for Rich Burns
    • New architectural framework for TTNC
    • Complete situational awareness. Access to commands and sensor streams
    • Software Engineering Division/Code 580
    • A2P as a toolbox, but needs to have NASA-relevant analytic capabilities
    • GMSEC overview

Phil 10.30.18

7:00 – 3:30 ASRC PhD

  • Search as embodied in the “Ten Blue Links” meets the requirements of a Perrow “Normal Accident”:
    • The search results are densely connected. That’s how PageRank works. Even latent connections matter.
    • The change in popularity of a page rapidly affects the rank. So the connections are stiff
    • The relationships of the returned links both to each other and to the broader information landscape in general is hidden.
    • An additional density and stiffness issue is that everyone uses Google, so there is a dense, stiff connection between the search engine and the population of users
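The density claim above can be seen directly in the PageRank iteration itself, since rank flows along every edge and a popularity change anywhere propagates to every connected page. A minimal power-iteration sketch (textbook PageRank, not Google’s production version):

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration on a link graph. links[i] lists the pages that page i
    links to. Rank flows along every edge each pass, so the system is densely
    coupled: any change in one page's popularity reaches all connected pages."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for page, outs in enumerate(links):
            if outs:
                share = rank[page] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:  # dangling page: spread its rank evenly everywhere
                for q in range(n):
                    new[q] += d * rank[page] / n
        rank = new
    return rank
```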
  • Write up something about how
    • ML can make maps, which decrease the likelihood of IR contributing to normal accidents
    • AI can use these maps to understand the shape of human belief space, and where the positive regions and dangerous sinks are.
  • Two measures for maps are the concepts of Range and Length. Range is the distance over which a trajectory can be placed on the map and remain contiguous. Length is the total distance that a trajectory travels, independent of the map it’s placed on.
  • Write up the basic algorithm of ML to map production
    • Take a set of trajectories that are known to be in the same belief region (why JuryRoom is needed) as the input
    • Generate an N-dimensional coordinate frame that best preserves length over the greatest range.
    • What is used as the basis for the trajectory may matter. The basis can range from letters, at a minimum, up to high-level topics. I think any map reconstruction based on letters would be a tangle, with clumps around TH, ER, ON, and AN. At the other end, an all-encompassing meta-topic, like WORDS, would be a single, accurate, but useless point. So map reconstruction becomes possible somewhere between these two extremes.
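A minimal way to compute these two measures on a placed trajectory is sketched below. `contiguous_range` is one possible operationalization of Range, with `max_step` as an assumed contiguity threshold; both function names are mine:

```python
import math

def length(points):
    """Total distance a trajectory travels: the sum of consecutive steps,
    independent of which map the points were placed on."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def contiguous_range(points, max_step):
    """Longest run of consecutive steps that stay within max_step of each
    other: one way to operationalize Range as contiguous placement."""
    best = run = 1
    for a, b in zip(points, points[1:]):
        run = run + 1 if math.dist(a, b) <= max_step else 1
        best = max(best, run)
    return best
```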
  • The Nietzsche text is pretty good. In particular, check out the way the sentences form based on the seed “s when one is being cursed.”
    • the fact that the spirit of the spirit of the body and still the stands of the world
    • the fact that the last is a prostion of the conceal the investion, there is our grust
    • the fact them strongests! it is incoke when it is liuderan of human particiay
    • the fact that she could as eudop bkems to overcore and dogmofuld
    • In this case, the first 2-3 words are the same, followed by random, semi-structured text. That’s promising, since the compare would be on the seed plus the generated text.
  • Today, see how fast a “Shining” (All work and no play makes Jack a dull boy.) text can be learned and then try each keyword as a start. As we move through the sentence, the probability of the next words should change.
    • Generate the text set
    • Train the Nietzsche model on the new text. Done. Here are examples with one epoch and a batch size of 32, with a temperature of 1.0:
      ----- diversity: 0.2
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      
      ----- diversity: 0.5
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      
      ----- diversity: 1.0
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a dull boy anl wory and no play makes jand no play makes jack a dull boy all work and no play makes jack a 
      
      ----- diversity: 1.2
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a pull boy all work and no play makes jack andull boy all work and no play makes jack a dull work and no play makes jack andull

      Note that the errors start with a temperature of 1.0 or greater

    • Rewrite the last part of the code to generate text based on each word in the sentence.
      • So I tried that and got gobbledygook. The issue is that the prediction only works on waveform-sized chunks. To verify this, I created a seed from the input text, truncating it to maxlen (20 in this case):
        sentence = "all work and no play makes jack a dull boy"[:maxlen]

        That worked, but it means that the character-based approach isn’t going to work

        ----- temperature: 0.2
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
        
        ----- temperature: 0.5
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
        
        ----- temperature: 1.0
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy pllwwork wnd no play makes 
        
        ----- temperature: 1.2
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes


    • Based on this result and the ensuing chat with Aaron, we’re going to revisit the whole LSTM with numbers and build out a process that will support words instead of characters.
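A word-level replacement for the character windows could start with something like this. A sketch of the data preparation only, not the LSTM itself; the function name and windowing scheme are mine:

```python
def word_sequences(text, maxlen=5):
    """Word-level training pairs: maxlen-word windows plus the word that
    follows each window -- the word analogue of the Keras char-LSTM example's
    character windows. Returns index sequences, targets, and the vocabulary."""
    words = text.split()
    vocab = sorted(set(words))
    index = {w: i for i, w in enumerate(vocab)}
    xs, ys = [], []
    for i in range(len(words) - maxlen):
        xs.append([index[w] for w in words[i:i + maxlen]])
        ys.append(index[words[i + maxlen]])
    return xs, ys, vocab
```

With words as tokens, a seed like “all work and no play” is a whole number of tokens, so the truncation problem above goes away.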
  • Looking for CMAC models, I found Self Organizing Feature Maps at NeuPy.com:
  • Here’s How Much Bots Drive Conversation During News Events
    • Late last week, about 60 percent of the conversation was driven by likely bots. Over the weekend, even as the conversation about the caravan was overshadowed by more recent tragedies, bots were still driving nearly 40 percent of the caravan conversation on Twitter. That’s according to an assessment by Robhat Labs, a startup founded by two UC Berkeley students that builds tools to detect bots online. The team’s first product, a Chrome extension called BotCheck.me, allows users to see which accounts in their Twitter timelines are most likely bots. Now it’s launching a new tool aimed at news organizations called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag

Phil 10.29.18

7:00 – 5:00 ASRC PhD

  • This looks like a Big Deal from Google – Working together to apply AI for social good
    • Google.org is issuing an open call to organizations around the world to submit their ideas for how they could use AI to help address societal challenges. Selected organizations will receive support from Google’s AI experts, Google.org grant funding from a $25M pool, credit and consulting from Google Cloud, and more.
    • We look forward to receiving your application on or before 11:59 p.m. PT on January 22, 2019, and we encourage you to apply early given that we expect high volume within the last few hours of the application window. Thank you!
    • Application Guide
    • Application form (can’t save, compose offline using guide, above)
  • Finished my writeup on Meltdown
  • Waiting for a response from Antonio
  • Meeting with Don at 9:00 to discuss BAA partnership.
    • Don is comfortable with being PI or co-PI, whichever works best. When we call technical POCs, we speak on his behalf
    • We discussed how he could participate with the development of theoretical models based on signed graph Laplacians creating structures that can move in belief space. He thinks the idea has merit, and can put in up to 30% of his time on mathematical models and writing
    • ASRC has already partnered with UMBC. ASRC would sub to UMBC
    • Ordinarily, IP is distributed proportional to the charged hours
    • Don has access to other funding vehicles that can support the Army BAA, but this would make things more complicated. These should be discussed if we can’t make a “clean” agreement that meets our funding needs
  • Pinged Brian about his defense.
  • Some weekend thoughts
    • Opinion dynamics systems describe how communication within a network occurs, but disregard the motion of the network as a whole. In cases where the opinions converge, the network is stiff.
    • Graph Laplacians could model “othering” by having negative weights. It looks like these are known as signed Laplacians, and are useful to denote difference. The trick is to discover the equations of motion. How do you model a “social particle”?
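A signed Laplacian is easy to construct: using |w| on the diagonal keeps negative “othering” edges from cancelling out of the degree. A dense-matrix sketch; the weight values in the test are illustrative:

```python
def signed_laplacian(weights):
    """Signed graph Laplacian L = D - W, where D_ii = sum_j |w_ij|. Negative
    ('othering') edges add to the degree rather than cancelling, which is the
    standard signed-Laplacian construction."""
    n = len(weights)
    lap = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                lap[i][j] = -weights[i][j]
                lap[i][i] += abs(weights[i][j])
    return lap
```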
  • Just discovered the journal Swarm Intelligence
    • Swarm Intelligence is the principal peer reviewed publication dedicated to reporting research and new developments in this multidisciplinary field. The journal publishes original research articles and occasional reviews on theoretical, experimental, and practical aspects of swarm intelligence. It offers readers reports on advances in the understanding and utilization of systems that are based on the principles of swarm intelligence. Emphasis is given to such topics as the modeling and analysis of collective biological systems; application of biological swarm intelligence models to real-world problems; and theoretical and empirical research in ant colony optimization, particle swarm optimization, swarm robotics, and other swarm intelligence algorithms. Articles often combine experimental and theoretical work.
  • I think it’s time to start ramping up on the text generation!
      • Updated my home box to tensorflow 1.11.0. Testing to see if it still works using the Deep Learning with Keras simple_nueral_net.py example. Hasn’t broken (yet…), but is taking a long time… Worked! And it’s much faster the second time. Don’t know why that is and can’t find anything online that talks to that.
        Loss: 0.5043802047491074
        Accuracy: 0.8782
        Time =  211.42629722093085
      • Found this keras example for generating Nietzsche


    • Trying it out. This may be an overnight run… But it is running.
  • Had a good discussion with Aaron about how mapmaking could be framed as an ML problem. More writeup tomorrow.

Phil 10.17.18

7:00 – 4:00 Antonio Workshop

Phil 10.8.18

7:00 – 12:00, 2:00 – 5:00 ASRC Research

  • Finish up At Home in the Universe notes – done!
  • Get started on framing out Antonio’s paper – good progress!
    • Basically, Aaron and I think there is a spectrum of interaction that can occur in these systems. At one end is some kind of market, where communication is mediated through price, time, and convenience to the transportation user. At the other is a more top-down, control-system way of dealing with this. NIST RCS would be an example of this. In between these two extremes are control hierarchies that in turn interact through markets
  • Wrote up some early thoughts on how simulation and machine learning can be a thinking fast and slow solution to understandable AI

Phil 9.28.18

7:30 – 4:00 ASRC MKT

  • Stumbled on this podcast this morning: How Small Problems Snowball Into Big Disasters
  • How to Prepare for a Crisis You Couldn’t Possibly Predict
  • I’m trying to think about how this should be applied to human/machine ecologies. I think that simulation is really important because it lets one model patch compare itself against another model without real-world impacts. This has something to do with a shared, multi-instance environment simulation as well. The environment provides one level of transparent interaction, but there also needs to be some level of inadvertent social information that shows some insight into how a particular system is working.
    • When the simulation and the real world start to diverge for a system, that needs to be signaled
    • Systems need to be able to “look into” other simulations and compare like with like. So a tagged item (bicycle) in one sim is the same in another.
    • Is there an OS that hands out environments?
    • How does a decentralized system coordinate? Is there an answer in MMOGs?
  • Kate Starbird’s presentation was interesting as always. We had a chance to talk afterwards, and she’d like to see our work, so I’ve sent her links to the last two papers.
    I also met Bill Braniff, who is the director of the UMD Study of Terrorism and Responses to Terrorism (START). He got papers too, with a brief description of how mapping could aid in the detection of radicalization patterns.
    Then at lunch, I had a chance to meet with Roger Bostelman from NIST. He’s interested in writing standards for fleet and swarm vehicles, and is interested in making sure that standards mitigate the chance of stampeding autonomous vehicles, so I sent him the Blue Sky draft.
    And lastly, I got a phone call from Aaron who says that our project will be terminated December 31, after which there will be no more IR&D at ASRC. It was a nice run while it lasted. And they may change their minds, but I doubt it.