Category Archives: Machine Learning

Phil 1.7.19

7:00 – 5:00 ASRC

  • Call Tim – The week looks dry
  • Schedule Physical – try tomorrow?
  • Continue with A guided tour through a dirt-simple “deep” neural network. Finished the learning section, started on graphing
  • Downloaded the latest antibubbles and ran processing
  • More financial forecasting?
  • Sprint review?
    • Prepping by adding in all the things that I wound up doing
  • Worked on getting Aaron’s code working, which required installing MSVC 2017, which in turn required redistributing apps to clear up space on the SSD drive.

Phil 1.4.19

7:00 – 5:30 ASRC NASA

  • Ping Shimei – Tuesday at 4:00
  • Ping Don – Wednesday at 4:00
  • Hammerhead – print shipping label. Use Karoo box on bookshelf
  • Antibubbles is coming along really well. If Saturday is really going to be a rainy day, maybe get started on the PHP story code? Note: check in the HTML source how pictures are referenced
  • Try changing the error chart so that each sample is a separate line (along with the average?). Done. I like this a lot! [image: per-sample output error chart] (A quick sketch of this kind of plot follows this list.)
  • Walk through SimpleLayer in the order that it’s used
    • Creation
    • Training
    • Learning
    • Graphing
  • Beat on the prediction plumbing with Aaron. The parts that collect the error and produce a forecast are there, but not working right?
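  • Here’s a minimal matplotlib sketch of the per-sample error chart mentioned above: one line per sample, plus the average. The error data is synthetic, just for illustration:
    import numpy as np
    import matplotlib.pyplot as plt

    # fake per-sample error curves: 5 samples, 50 epochs, generally decaying
    errors = np.random.rand(5, 50) * np.linspace(1.0, 0.1, 50)
    for i, sample in enumerate(errors):
        plt.plot(sample, alpha=0.5, label="sample {}".format(i))
    plt.plot(errors.mean(axis=0), color="black", linewidth=2, label="average")
    plt.xlabel("epoch")
    plt.ylabel("error")
    plt.legend()
    plt.show()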

Phil 1.3.19

7:00 – 5:30 ASRC NASA

  • Realized that error calculation for Holt can simply be the error from the horizontal forecast line of each prediction. There would be a distribution for T-1, T-2, T-3 … T-n. Later, when we get fancy, we can use the phi (damping) curve. So dumb.
  • Continuing my deep neural network writeup
  • Continuing Holt-Winters work with Aaron – probability distributions!
    • Ok, I think I’ve got this stupid thing figured out. Below is a screenshot of the table of predictions. These predictions are based on applying exponential smoothing to a history of sine waves:
      • [image: sine wave history]
    • The table consists of a set of predictions and their observed values (not sure why the time steps in the left column are duplicated; need to fix that):
      • [image: prediction table]
    • I can then make a table that contains each prediction as a line stretching into the future:
      • [image: prediction populations]
    • This “population of prediction errors” can then be used to calculate the amount of error in our forecast:
      • [image: forecast error charts]
    • This will work for any of the prediction schemes. We just have to store all of the predictions and observed values. (A minimal sketch of the idea follows this list.)
    • Here’s the spreadsheet: ExponentialSmoothing2
  • Ping Shimei – campus closed
  • Ping Don – campus closed
  • Hammerhead 
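  • A minimal sketch of the population-of-errors idea, assuming a sine-wave history and a flat-line forecast from a simple exponentially smoothed level. The alpha and horizon values are made up for illustration:
    import math
    import numpy as np

    alpha = 0.5      # smoothing factor
    horizon = 4      # how many steps ahead each prediction extends
    history = [math.sin(0.25 * t) for t in range(100)]

    level = history[0]
    errors_by_step = [[] for _ in range(horizon)]  # one error population per step ahead

    for t, observed in enumerate(history):
        # the flat-line forecast: every future step gets the current level
        for n in range(1, horizon + 1):
            if t + n < len(history):
                errors_by_step[n - 1].append(history[t + n] - level)
        # fold the new observation into the level
        level = alpha * observed + (1.0 - alpha) * level

    for n, errs in enumerate(errors_by_step, start=1):
        print("T-{}: mean = {:+.3f}, std = {:.3f}".format(n, np.mean(errs), np.std(errs)))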


Phil 1.2.19

Gotta get used to writing dates

7:00 – 5:00 ASRC PhD NASA

  • Continuing my deep neural network writeup
  • Continuing Holt-Winters work with Aaron
  • Continuing to read Clockwork Muse
    • Martindale spends the book talking about poetry, but I’m listening to Kind of Blue right now and I realize that jazz is similar. The thought leaders are in some state where they are paying attention to each other and not much else. That’s how we get a trajectory that leads to Bitches Brew.
    • I think this is probably a generally applicable pattern. The thing that I need to think through is how a small group of highly creative people in what could be described as an echo-ish chamber differs from mass activity in a large attractor like authoritarianism.
    • We want to know how many dimensions are needed to account for the similarities among poets. Fortunately, once we have correlated all of the poets with one another, a procedure called multidimensional scaling will tell us just this. Multidimensional scaling tells us that the twenty-one French poets differ along three main dimensions. These three dimensions account for 94 percent of the similarity matrix. (Page 114) (A quick sketch of the MDS step follows this list.)
  • I have spent the day doing PHENOMENALLY STUPID MATH (Holt exponential smoothing with drecksnest damping). Excel file: ExponentialSmoothing2
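  • A quick sklearn sketch of the multidimensional scaling step Martindale describes. The 21×21 correlation matrix here is random, standing in for the real poet-by-poet correlations:
    import numpy as np
    from sklearn.manifold import MDS

    corr = np.random.uniform(-1.0, 1.0, (21, 21))
    corr = (corr + corr.T) / 2.0    # correlations are symmetric
    np.fill_diagonal(corr, 1.0)

    # MDS wants dissimilarities, so convert correlation to distance
    dissim = 1.0 - corr
    coords = MDS(n_components=3, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)
    print(coords.shape)  # (21, 3): each poet placed along three dimensions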

Phil 12.31.18

7:00 – 4:30 ASRC NASA

  • Set up appt for physical
  • This is fabulous! Seeing Theory
    • Seeing Theory was created by Daniel Kunin while an undergraduate at Brown University. The goal of this website is to make statistics more accessible through interactive visualizations (designed using Mike Bostock’s JavaScript library D3.js).
  • Working on cleaning up and validating my Very Simple Perceptron class (a stripped-down sketch of the idea follows this list). I think I’m going to write the whole thing up as its own blog post
  • Why Trump Reigns as King Cyrus
    • This isn’t the religious right we thought we knew. The Christian nationalist movement today is authoritarian, paranoid and patriarchal at its core. They aren’t fighting a culture war. They’re making a direct attack on democracy itself.
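  • For reference, the core of the Very Simple Perceptron idea in a minimal sketch, assuming the classic perceptron update rule; my actual class has more plumbing, and the names here are provisional:
    import numpy as np

    class VerySimplePerceptron:
        def __init__(self, num_inputs: int, lr: float = 0.1):
            self.weights = np.zeros(num_inputs + 1)  # +1 for the bias
            self.lr = lr

        def predict(self, x) -> int:
            return 1 if np.dot(self.weights[1:], x) + self.weights[0] > 0 else 0

        def train(self, samples, labels, epochs: int = 10):
            # classic perceptron rule: nudge the weights by the signed error
            for _ in range(epochs):
                for x, target in zip(samples, labels):
                    err = target - self.predict(x)
                    self.weights[1:] += self.lr * err * np.asarray(x)
                    self.weights[0] += self.lr * err

    # learns AND in a handful of epochs
    p = VerySimplePerceptron(2)
    p.train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
    print([p.predict(x) for x in [[0, 0], [0, 1], [1, 0], [1, 1]]])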


Phil 12.24.18

PhD 7:00 – 3:00

Phil 12.21.18

7:00 – 4:30 ASRC PhD/NASA/NOAA

  • Spatial Representations in the Human Brain
    • While extensive research on the neurophysiology of spatial memory has been carried out in rodents, memory research in humans had traditionally focused on more abstract, language-based tasks. Recent studies have begun to address this gap using virtual navigation tasks in combination with electrophysiological recordings in humans. These studies suggest that the human medial temporal lobe (MTL) is equipped with a population of place and grid cells similar to that previously observed in the rodent brain. Furthermore, theta oscillations have been linked to spatial navigation and, more specifically, to the encoding and retrieval of spatial information. While some studies suggest a single navigational theta rhythm which is of lower frequency in humans than rodents, other studies advocate for the existence of two functionally distinct delta–theta frequency bands involved in both spatial and episodic memory. Despite the general consensus between rodent and human electrophysiology, behavioral work in humans does not unequivocally support the use of a metric Euclidean map for navigation. Formal models of navigational behavior, which specifically consider the spatial scale of the environment and complementary learning mechanisms, may help to better understand different navigational strategies and their neurophysiological mechanisms. Finally, the functional overlap of spatial and declarative memory in the MTL calls for a unified theory of MTL function. Such a theory will critically rely upon linking task-related phenomena at multiple temporal and spatial scales. Understanding how single cell responses relate to ongoing theta oscillations during both the encoding and retrieval of spatial and non-spatial associations appears to be key toward developing a more mechanistic understanding of memory processes in the MTL.
  • Three Kinds of Spatial Cognition
    • Nora S. Newcombe (Scholar)
    • Spatial cognition is often (but wrongly) conceptualized as a single domain of cognition. However, humans function in more than one way in the spatial world. We navigate, as do all mobile animals, but we also manipulate objects using distinctive hands with opposable thumbs, unlike other species. In fact, an important characteristic of human adaptation is the ability to invent tools. Of course, another central asset is human symbolic ability, which includes the ability to spatialize thought in abstractions such as maps, graphs, and analogies. Thus, there are at least three kinds of spatial cognition with three separable functions. Navigation involves moving around the environment to find food and shelter, and to avoid danger. It draws on several interconnected neural subsystems that track movement and encode the location of external entities with respect to each other and the moving self (i.e., extrinsic coding), and it integrates these inputs to achieve best‐possible estimates. Human navigation is characterized by a great deal of individual variation. Tool use and invention involves the mental representation and transformation of the shapes of objects (i.e., intrinsic coding). It relies on substantially different neural subsystems than navigation. Like navigation, it shows marked individual differences, which are related to variations in learning in science, technology, engineering, and mathematics (STEM). Spatialization is an aspect of human symbolic skill that cuts across multiple cognitive domains and involves many kinds of spatial symbol systems, including language, metaphor, analogy, gesture, sketches, diagrams, graphs, maps, and mental images. These spatial symbol systems are vital to many kinds of learning, including in STEM. Future research on human spatial cognition needs to further delineate the origins, development, neural substrates, variability, and malleability of navigation, tool use, and abstract spatial thinking, as well as their interconnections to each other and to other cognitive skills.
  • Added a bit to my Normal Accidents notes
  • Working on saving out to the history and item tables
  • Turns out that if you want to retrieve floats from a postgres table using psycopg2, you have to register a custom handler:
    import psycopg2.extensions

    # NUMERIC/DECIMAL columns come back as Decimal by default; convert them to float
    DEC2FLOAT = psycopg2.extensions.new_type(
        psycopg2.extensions.DECIMAL.values,
        'DEC2FLOAT',
        lambda value, curs: float(value) if value is not None else None)
    psycopg2.extensions.register_type(DEC2FLOAT)
  • Learned about dollar-quoting as per https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING (a small example follows the snippets below)
  • To execute big inserts with psycopg2, you need to set autocommit = True (otherwise everything sits in an open transaction until conn.commit() is called)
    self.conn = psycopg2.connect(config_str)
    self.conn.autocommit = True  # commit each execute() immediately
  • And you should use try/except
    def query_no_result(self, sql: str) -> bool:
        try:
            self.cursor.execute(sql)
            return True
        except psycopg2.Error as e:
            print("unable to execute: {} ({})".format(sql, e))
            return False
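  • And here’s what the dollar-quoting looks like in practice, using the cursor from above. The table and column names are made up for illustration:
    # no escaping needed for the apostrophes and quotes inside the $$ ... $$ pair
    sql = "INSERT INTO history (notes) VALUES ($$it's got 'quotes' in it$$)"
    self.cursor.execute(sql)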


Phil 12.19.18

7:00 – 4:30 ASRC PhD/NASA

  • I think the IEEE paper with Antonio should be something on the math behind diversity/chaos <-> ensembles <-> hierarchies/stampedes
  • Continuing with Normal Accidents
  • Thinking about costly signalling (economics, and more generally) WRT stampede theory. It’s a form of friction. Proof-of-work could be adaptively added to a system to inhibit stampedes?
    • In contract theory, signalling is the idea that one party (termed the agent) credibly conveys some information about itself to another party (the principal). For example, in Michael Spence’s job-market signalling model, (potential) employees send a signal about their ability level to the employer by acquiring education credentials. The informational value of the credential comes from the fact that the employer believes the credential is positively correlated with having greater ability and difficult for low-ability employees to obtain. Thus the credential enables the employer to reliably distinguish low-ability workers from high-ability workers.
  • Tweak optimizer. Change callback method to replace_with_your_method(), and add some documentation on how to use the classes
  • Move clustering paper to LaTex folder – done!
  • Pinged Antonio about the new paper ideas
  • More predictive analytics? Kind of – setting up to read and write out time series to db
  • Upgrading my PostgreSQL setup to read/write from a table
    • Installed drivers and python packages
    • Reading and writing in the IDE
      • Created and populated a dummy data table
    • Reading in Python works; writing is next (a minimal round-trip sketch follows this list)
  • I was thinking that if we run out of D&D data for other map configurations, that I can use the current data with some repurposing (replace orc with xxx) to create conversations about other environments using human speech. Kind of like PCA for DNA.
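  • For reference, a minimal psycopg2 round trip for the time-series work. The connection string, table, and column names are dummies, like the dummy table mentioned above:
    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres password=secret")
    conn.autocommit = True
    cur = conn.cursor()

    # write a few (step, value) samples, then read them back in order
    for step, val in enumerate([1.5, 2.5, 3.5]):
        cur.execute("INSERT INTO ts_data (step, val) VALUES (%s, %s)", (step, val))
    cur.execute("SELECT step, val FROM ts_data ORDER BY step")
    for step, val in cur.fetchall():
        print(step, val)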

Phil 12.17.18

7:00 – 4:30 ASRC NASA/PhD

  • Ted Radio Hour interview with Margaret Heffernan, who spoke about her book, Willful Blindness:
    • “Companies that have been studied for willful blindness can be asked questions like, are there issues at work that people are afraid to raise? And when academics have done studies like this of corporations in the United States, what they find is 85 percent of people say yes. Eighty-five percent of people know there’s a problem, but they won’t say anything. And when I duplicated the research in Europe, asking all the same questions, I found exactly the same number. And what’s really interesting is that when I go to companies in Switzerland, they tell me this is a uniquely Swiss problem. And when I go to Germany, they say, oh yes, this is the German disease. And when I go to companies in England they say, oh yeah, the British are really bad at this. And the truth is, this is a human problem. We’re all, under certain circumstances, willfully blind.”
    • I’ve been thinking about this a lot because when I say, well, why don’t people speak up? What I get is, oh, it’s the culture. And I think, well, what is the culture? The culture is the accumulation of everybody’s actions. And in many of the organizations I work with, change starts in very unexpected places because people just decide, I want to do this or I want to try this. And then they discover they don’t get shot. And then they discover that, actually, now, they’ve got a really exciting project. You know, I think the most dangerous thing in organizations is silence. It’s all those brains whizzing around full of observations and insight and ideas that are not being articulated.
    • I think that the 15% who do speak out are Nomads. They are mis-aligned with the culture, and as such they 1) find it easier to see problems and solutions, and 2) find it harder not to behave independently.
  • Bayesian Layers: A Module for Neural Network Uncertainty
    • We describe Bayesian Layers, a module designed for fast experimentation with neural network uncertainty. It extends neural network libraries with layers capturing uncertainty over weights (Bayesian neural nets), pre-activation units (dropout), activations (“stochastic output layers”), and the function itself (Gaussian processes). With reversible layers, one can also propagate uncertainty from input to output such as for flow-based distributions and constant-memory backpropagation. Bayesian Layers are a drop-in replacement for other layers, maintaining core features that one typically desires for experimentation. As demonstration, we fit a 10-billion parameter “Bayesian Transformer” on 512 TPUv2 cores, which replaces attention layers with their Bayesian counterpart.
  • Continuing with Normal Accidents
  • Nice interactive on disinformation on Twitter
  • The universal decay of collective memory and attention
    • Collective memory and attention are sustained by two channels: oral communication (communicative memory) and the physical recording of information (cultural memory). Here, we use data on the citation of academic articles and patents, and on the online attention received by songs, movies and biographies, to describe the temporal decay of the attention received by cultural products. We show that, once we isolate the temporal dimension of the decay, the attention received by cultural products decays following a universal biexponential function. We explain this universality by proposing a mathematical model based on communicative and cultural memory, which fits the data better than previously proposed log-normal and exponential models. Our results reveal that biographies remain in our communicative memory the longest (20–30 years) and music the shortest (about 5.6 years). These findings show that the average attention received by cultural products decays following a universal biexponential function.
  • Zach walkthrough
    • Yarn Workspaces
    • NextJS – Tools for developing React Apps – check the github repo to see, for example, how to roll your own web server
    • REACT hooks api
  • Got the basic recursion piece of the optimizer working right. Works for ints, floats, and strings:
    def cascading_step(self):
        # take on the current value for this level of the sweep
        self.cur_val = self.range_array[self.index]
        print("{} cur_val = {}".format(self.name, self.cur_val))

        # step the child first; this level only advances once the child
        # has wrapped all the way through its range (odometer-style)
        child_complete = True
        if self.child:
            child_complete = self.child.cascading_step()

        if child_complete:
            self.index += 1
            if self.index >= len(self.range_array):
                # wrapped around, so tell the parent to advance too
                self.index = 0
                return True
        return False
  • And here’s the first working test:
    v3 cur_val = v3_0
    v2 cur_val = v2_0
    v1 cur_val = v1_0
    step 0 -----------
    v3 cur_val = v3_0
    v2 cur_val = v2_0
    v1 cur_val = v1_1
    step 1 -----------
    v3 cur_val = v3_0
    v2 cur_val = v2_0
    v1 cur_val = v1_2
    step 2 -----------
    v3 cur_val = v3_0
    v2 cur_val = v2_0
    v1 cur_val = v1_3
    step 3 -----------
    v3 cur_val = v3_0
    v2 cur_val = v2_1
    v1 cur_val = v1_0


Phil 12.14.18

7:00 – 4:30 ASRC PhD/NASA

  • Sent Greg a couple of quick notes on using CNNs to match bacteria to phages.
  • Continuing with Normal Accidents
  • A Digital Test of the News: Checking the Web for Public Facts – Workshop report, December 2018
    • The Digital Test of the News workshop brought together digital sociologists, data visualisation and new media researchers at the Centre for Interdisciplinary Methodologies at the University of Warwick on 8 and 9 May 2018. The workshop is part of a broader research collaboration between the Centre for Interdisciplinary Methodologies and the Public Data Lab which investigates the changing nature of public knowledge formation in digital societies and develops inventive methods to capture and visualise knowledge dynamics online. Below we outline the workshop’s aims and outcomes.
  • Added plots to the NN code. Everything seems to look right. Looking at the individual weights in a layer is very informative. Need to add this kind of plotting to our Keras code somehow (a rough sketch follows this list): [image: per-layer weight plots]
  • Changing the coherence code so that the row values are zero or one. Actually, as the amount of data grows, BOW is getting more useful raw. This spreadsheet shows all posts, including the DM: All_posts. Note that the word frequency follows a power law (R² = .9352).
  • Started the optimizer and excel utils classes
  • NOAA meeting
  • NOAA meeting 2
  • Some thoughts from Aaron on our initial ML approach
    • I think for the first pass we do 1-2 models based on contract type focused on the $500k+ award contracts (120ish total).
    • We construct the inputs like sentences in the word generation LSTM with padded lengths equal to the longest running contract of that type, sorted by length. The model can be tested against current contracts held out in a test set using point by point prediction so we can show accuracy of the model against existing data and use that to set our accuracy threshold.
    • My guess is this will be at least an ORF/PAC model (two different primary contract types) which we can work on tuning with the learning_optimizer.py to get as accurate as possible in the timeframe we have.
    • One of the things we advertise as “next steps” is a detailed analysis of contracts based on similarity measures to identify a series of more accurate models. We can pair this with additional models such as Exponential Smoothing and ARIMA which use fundamentally the exact same pipeline.
    • The GUI will be plumbed up to show these analytic outputs on a per contract basis and we can show by the end of January a simple linear model and the LSTM model to demonstrate how Exponential Smoothing / ARIMA or model averages could be displayed. Once we have these outputs we can take the top 5 highest predicted UDO and display in a summary page so they can use those as a launching off point.
    • If we do it this way it means we only have to focus on the completion of the TimeSeriesML LSTM and its data pipeline, with a maximum of 2 initial models (by contract type). I think that is a far more reasonable thing to complete in the timeframe and should still be really exciting to show off.
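  • Here’s roughly what I have in mind for the Keras weight plotting, sketched from memory; model stands in for whatever Sequential model we end up with, and the layer index is a placeholder:
    import matplotlib.pyplot as plt

    # Dense layers return [kernel, bias]; the kernel is shaped (inputs, units)
    weights, biases = model.layers[0].get_weights()
    plt.imshow(weights, cmap="coolwarm", aspect="auto")
    plt.colorbar()
    plt.xlabel("unit")
    plt.ylabel("input")
    plt.title("layer 0 weights")
    plt.show()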

Phil 12.12.18

7:00 – 4:30 ASRC NASA/PhD

  • Do a dungeon analytic with new posts and DM for Aaron – done!
  • Send email to Shimei for registration and meeting after grading is finished
  • Start review of Normal Accidents – started!
  • Debug NN code – in process. Very tricky figuring out the relationships between the layers in backpropagation (see the sketch after this list)
  • Sprint planning
  • NASA meeting
  • Talked to Zach about the tagging project. Looks good, but I wonder how much time we’ll have. Got a name though – TaggerML
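  • The relationship that kept tripping me up, in a minimal numpy sketch (shapes and learning rate are arbitrary): each layer’s delta is built from the next layer’s delta and the weights between them:
    import numpy as np

    x = np.random.rand(4, 3)     # batch of 4 samples, 3 inputs
    y = np.random.rand(4, 2)     # 2 targets
    w0 = np.random.rand(3, 5)    # input -> hidden weights
    w1 = np.random.rand(5, 2)    # hidden -> output weights

    for _ in range(100):
        h = np.tanh(x @ w0)                  # forward through the hidden layer
        out = h @ w1                         # linear output layer
        d_out = out - y                      # output delta
        d_h = (d_out @ w1.T) * (1 - h * h)   # hidden delta: next delta pushed back through w1
        w1 -= 0.1 * h.T @ d_out              # each weight update pairs a layer's
        w0 -= 0.1 * x.T @ d_h                # activations with the *following* delta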

Phil 12.11.18

7:00 – 4:30 ASRC PhD/NASA

[image: Mercator projection map]

Somehow, this needs to get into a discussion of the trustworthiness of maps

  • I realized that we can hand-code these initial dungeons, learn a lot and make this a baseline part of the study. This means that we can compare human and machine data extraction for map making. My initial thoughts as to the sequence are:
    • Step 1: Finish running the initial dungeon
    • Step 2: researchers determine a set of common questions that would be appropriate for each room. Something like:
      • Who is the character?
      • Where is the character?
      • What is the character doing?
      • Why is the character doing this?
    • Each answer should also include the section of the text that the reader thinks answers that question. Once this has been worked out on paper, a simple survey website can be built that automates this process and supports data collection at moderate scales.
    • Use answers to populate a “Trajectories” sheet in an xml file and build a map!
    • Step 3: Partially automate the extraction to give users a generated survey that lets them select the most likely answer/text for the who/where/what/why questions. Generate more maps!
    • Step 4: Full automation
  • Added these thoughts to the analysis section of the google doc
  • The 11th International Natural Language Generation Conference
    • The INLG conference is the main international forum for the presentation and discussion of all aspects of Natural Language Generation (NLG), including data-to-text, concept-to-text, text-to-text and vision to-text approaches. Special topics of interest for the 2018 edition included:
      • Generating Text with Affect, Style and Personality,
      • Conversational Interfaces, Chatbots and NLG, and
      • Data-driven NLG (including the E2E Generation Challenge)
  • Back to grokking DNNs
    • Still building a SimpleLayer class that will take a set of neurons and create a weight array that will point to the next layer (a rough sketch follows this list)
    • Array formatting issues. Tricky.
    • I think I’m done enough to start debugging. Tomorrow.
  • Sprint review
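  • A rough sketch of the shape I’m going for with SimpleLayer, assuming numpy. The real class is still in flux, and all the names here are provisional:
    import numpy as np

    class SimpleLayer:
        def __init__(self, name: str, num_neurons: int):
            self.name = name
            self.neurons = np.zeros((1, num_neurons))  # row vector of activations
            self.weights = None   # allocated once we know the next layer's size
            self.target = None

        def connect(self, target: "SimpleLayer"):
            # one weight from each neuron here to each neuron in the next layer
            self.target = target
            self.weights = np.random.uniform(
                -1, 1, (self.neurons.shape[1], target.neurons.shape[1]))

        def forward(self):
            # the next layer's activations are this layer's activations times the weights
            if self.target is not None:
                self.target.neurons = self.neurons @ self.weights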

Phil 12.10.18

7:00 – 5:30 ASRC NASA/PhD

  • For my morning academic work, I am cooking delicious things.
  • There is text in the dungeon! Here’s what happened when I ran the analytics against 3 posts and held back the dungeon master. Rather than put up a bunch of screenshots, here’s the spreadsheet: Day_1_Dungeon_1
  • Russell Richie (twitter) (Scholar): “One of my favorite results in the paper is that you can compress the embeddings 10x or more while preserving prediction performance, suggesting that the type of knowledge used to make these kinds of judgments may only vary along a relative handful of latent dimensions.”
  • Ok, back to grokking DNNs
    • Building a SimpleLayer class that will take a set of neurons and create a weight array that will point to the next layer
  • Fika and meeting with Wayne
    • Ade might be interested in doing some coding work!
    • Went over the initial results spreadsheet with Wayne. Overall, progress seems on track. He had an additional thought for venues that I didn’t note.
    • Ping Shimei about 899