Category Archives: Lit Review

Phil 1.30.19

7:00 – 4:00 ASRC IRAD

Teaching a neural network to drive a car. It’s a simple network with a fixed number of hidden nodes (no NEAT) and no bias, yet it manages to drive the cars quickly and safely after just a few generations. The population is 650. The network evolves through random mutation (no cross-breeding). Fitness evaluation is currently done manually, as explained in the video.
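
A minimal sketch of this kind of mutation-only scheme (the toy fitness function, population sizes, and mutation rate here are my own placeholders, not the video's):

```python
import random

random.seed(42)  # reproducible demo

def evolve(fitness, genome_len=8, pop_size=50, generations=30, sigma=0.1):
    """Mutation-only evolution: rank by fitness, keep an elite,
    and refill the population with noisy copies (no cross-breeding)."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]  # top 20% survive unchanged
        pop = elite + [[g + random.gauss(0, sigma) for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# toy fitness: genomes closer to the all-ones vector score higher
best = evolve(lambda g: -sum((x - 1) ** 2 for x in g))
```

Because the elite is carried over unmutated, the best fitness never regresses between generations — the same property that lets the cars in the video improve steadily.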

  • This interactive balance between evolution and learning is exactly the sort of interaction that I think should be at the core of the research browser. The only addition is the ability to support groups collaboratively interacting with the information so that multiple analysts can train the system.
  • A quick thing on the power of belief spaces from a book review about, of all things, Hell. One of the things that gives dimension to a belief space is the fact that people show up.
    • Soon, he’d left their church and started one of his own, where he proclaimed his lenient gospel, pouring out pity and anger for those Christians whose so-called God was a petty torturer, until his little congregation petered out. Assured salvation couldn’t keep people in pews, it turned out. The whole episode, in its intensity and its focus on the stakes of textual interpretation, was reminiscent of Lucas Hnath’s recent play “The Christians,” about a pastor who comes out against Hell and sparks not relief but an exegetical nightmare.
  • Web Privacy Measurement in Real-Time Bidding Systems. A Graph-Based Approach to Rtb System Classification.
    • In the doctoral thesis, Robbert J. van Eijk investigates the advertisements online that seem to follow you. The technology enabling the advertisements is called Real-Time Bidding (RTB). An RTB system is defined as a network of partners enabling big data applications within the organizational field of marketing. The system aims to improve sales by real-time data-driven marketing and personalized (behavioral) advertising. The author applies network science algorithms to arrive at measuring the privacy component of RTB. In the thesis, it is shown that cluster-edge betweenness and node betweenness support us in understanding the partnerships of the ad-technology companies. From our research it transpires that the interconnection between partners in an RTB network is caused by the data flows of the companies themselves due to their specializations in ad technology. Furthermore, the author provides that a Graph-Based Methodological Approach (GBMA) controls the situation of differences in consent implementations in European countries. The GBMA is tested on a dataset of national and regional European news websites.
  • Continuing with Tkinter and ttk
      • That was easy!
        • app3
      • And now there is a scrollbar, which is a little odd to add. They are separate components that you have to explicitly link and place in the same ttk.Frame:
    # make the frame for the listbox and the scroller to live in
    self.lbox_frame = ttk.Frame(self.content_frame)
    
    # place the frame 
    self.lbox_frame.grid(column=0, row=0, rowspan=6, sticky=(N,W,E,S))
    
    # create the listbox and the scrollbar
    self.lbox = Listbox(self.lbox_frame, listvariable=self.cnames, height=5)
    lbox_scrollbar = ttk.Scrollbar(self.lbox_frame, orient=VERTICAL, command=self.lbox.yview)
    
    # after both components have been made, have the lbox point at the scroller
    self.lbox['yscrollcommand'] = lbox_scrollbar.set

    • If you get this wrong, then you can end up with a scrollbar in some other Frame, connected to your target. Here’s what happens if the parent is root:
      • badscroller
    • And here is where it’s in the lbox frame as in the code example above:
      • goodscroller
    • The tutorial’s fully formed examples are no more. Putting together a menu app with text. Got the text running with a scrollbar, and everything makes sense. Next up are the menus…scrollingtext
    • Here’s the version of the app with working menus: slackdbio
  • For seminar: Predictive Analysis by Leveraging Temporal User Behavior and User Embeddings
    • The rapid growth of mobile devices has resulted in the generation of a large number of user behavior logs that contain latent intentions and user interests. However, exploiting such data in real-world applications is still difficult for service providers due to the complexities of user behavior over a sheer number of possible actions that can vary according to time. In this work, a time-aware RNN model, TRNN, is proposed for predictive analysis from user behavior data. First, our approach predicts the next user action more accurately than the baselines including the n-gram models as well as two recently introduced time-aware RNN approaches. Second, we use TRNN to learn user embeddings from sequences of user actions and show that overall the TRNN embeddings outperform conventional RNN embeddings. Similar to how word embeddings benefit a wide range of tasks in natural language processing, the learned user embeddings are general and could be used in a variety of tasks in the digital marketing area. This claim is supported empirically by evaluating their utility in user conversion prediction, and preferred application prediction. According to the evaluation results, TRNN embeddings perform better than the baselines including Bag of Words (BoW), TFIDF and Doc2Vec. We believe that TRNN embeddings provide an effective representation for solving practical tasks such as recommendation, user segmentation and predictive analysis of business metrics.

Phil 1.29.19

7:00 – 5:30 ASRC IRAD

  • Theories of Error Back-Propagation in the Brain
    • This review article summarises recently proposed theories on how neural circuits in the brain could approximate the error back-propagation algorithm used by artificial neural networks. Computational models implementing these theories achieve learning as efficient as artificial neural networks, but they use simple synaptic plasticity rules based on activity of presynaptic and postsynaptic neurons. The models have similarities, such as including both feedforward and feedback connections, allowing information about error to propagate throughout the network. Furthermore, they incorporate experimental evidence on neural connectivity, responses, and plasticity. These models provide insights on how brain networks might be organised such that modification of synaptic weights on multiple levels of cortical hierarchy leads to improved performance on tasks.
  • Interactive Machine Learning by Visualization: A Small Data Solution
    • Machine learning algorithms and traditional data mining process usually require a large volume of data to train the algorithm-specific models, with little or no user feedback during the model building process. Such a “big data” based automatic learning strategy is sometimes unrealistic for applications where data collection or processing is very expensive or difficult, such as in clinical trials. Furthermore, expert knowledge can be very valuable in the model building process in some fields such as biomedical sciences. In this paper, we propose a new visual analytics approach to interactive machine learning and visual data mining. In this approach, multi-dimensional data visualization techniques are employed to facilitate user interactions with the machine learning and mining process. This allows dynamic user feedback in different forms, such as data selection, data labeling, and data correction, to enhance the efficiency of model building. In particular, this approach can significantly reduce the amount of data required for training an accurate model, and therefore can be highly impactful for applications where large amount of data is hard to obtain. The proposed approach is tested on two application problems: the handwriting recognition (classification) problem and the human cognitive score prediction (regression) problem. Both experiments show that visualization supported interactive machine learning and data mining can achieve the same accuracy as an automatic process can with much smaller training data sets.
  • Shifted Maps: Revealing spatio-temporal topologies in movement data
    • We present a hybrid visualization technique that integrates maps into network visualizations to reveal and analyze diverse topologies in geospatial movement data. With the rise of GPS tracking in various contexts such as smartphones and vehicles there has been a drastic increase in geospatial data being collected for personal reflection and organizational optimization. The generated movement datasets contain both geographical and temporal information, from which rich relational information can be derived. Common map visualizations perform especially well in revealing basic spatial patterns, but pay less attention to more nuanced relational properties. In contrast, network visualizations represent the specific topological structure of a dataset through the visual connections of nodes and their positioning. So far there has been relatively little research on combining these two approaches. Shifted Maps aims to bring maps and network visualizations together as equals. The visualization of places shown as circular map extracts and movements between places shown as edges, can be analyzed in different network arrangements, which reveal spatial and temporal topologies of movement data. We implemented a web-based prototype and report on challenges and opportunities about a novel network layout of places gathered during a qualitative evaluation.
    • Demo!
  • More TkInter.
    • Starting Modern Tkinter for Busy Python Developers
    • Spent a good deal of time working through how to get an image to appear. There are two issues:
      • Loading file formats:
        from tkinter import *
        from tkinter import ttk
        from PIL import Image, ImageTk
      • This is because python doesn’t know natively how to load much beyond gif, it seems. However, there is the Python Imaging Library (PIL), which does. Since the original PIL is deprecated, install Pillow instead. It looks like the import and bindings are the same.
      • dealing with garbage collection (“self” keeps the pointer alive):
        image = Image.open("hal.jpg")
        self.photo = ImageTk.PhotoImage(image)
        ttk.Label(mainframe, image=self.photo).grid(column=1, row=1, sticky=(W, E))
      • The issue is that if the local variable that contains the reference goes out of scope, the garbage collector (in Tkinter? Not sure) scoops it up before the picture can even appear, causing the system (and the debugger) to try to draw a None. If you make the reference global to the class (i.e. self.xxx), then the reference is maintained and everything works.
    • The relevant stack overflow post.
    • A pretty picture of everything working:
      • app
  • The 8.6.9 Tk/Ttk documentation
  • Looks like there are some WYSIWYG tools for building pages. PyGubu looks like it’s got the most recent activity.
  • Now my app resizes on grid layouts: app2

Phil 1.26.19

Tangled Worldview Model of Opinion Dynamics

  • We study the joint evolution of worldviews by proposing a model of opinion dynamics, which is inspired by notions from evolutionary ecology. Agents update their opinion on a specific issue based on their propensity to change — asserted by the social neighbours — weighted by their mutual similarity on other issues. Agents are, therefore, more influenced by neighbours with similar worldviews (set of opinions on various issues), resulting in a complex co-evolution of each opinion. Simulations show that the worldview evolution exhibits events of intermittent polarization when the social network is scale-free. This, in turn, triggers extreme crashes and surges in the popularity of various opinions. Using the proposed model, we highlight the role of network structure, bounded rationality of agents, and the role of key influential agents in causing polarization and intermittent reformation of worldviews on scale-free networks.
  • Saved to Flocking and Herding

Phil 1.25.19

7:00 – 5:30 ASRC NASA/PhD

    • Practical Deep Learning for Coders, v3
    • Continuing Clockwork Muse (reviews on Amazon are… amazingly thorough), which is a slog but an interesting slog. Martindale is talking about how the pattern of increasing arousal potential and primordial/stylistic content is self-similar across scales, from the individual work to populations and careers.
    • Had a bunch of thoughts about primordial content and the ending of the current dungeon.
    • Last day of working on NOAA. I think there is a better way to add/subtract months here in stackoverflow
    • Finish review of CHI paper. Mention Myanmar and that most fake news sharing is done by a tiny fraction of the users, so finding the heuristics of those users is a critical question. Done!
    • Setting up Fake news on Twitter during the 2016 U.S. presidential election as the next paper in the queue. The references look extensive (69!) and good.
    • TFW you don’t want any fancy modulo in your math confusing you:
      from typing import Tuple

      def add_month(year: int, month: int, offset: int) -> Tuple[int, int]:
          # print("original date = {}/{}, offset = {}".format(month, year, offset))
          new_month = month + offset
          new_year = year

          # walk the month back into the 1..12 range, adjusting the year as we go
          while new_month < 1:
              new_month += 12
              new_year -= 1
          while new_month > 12:
              new_month -= 12
              new_year += 1

          return new_month, new_year
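
      For what it’s worth, the “fancy modulo” version in the direction of that Stack Overflow answer would look something like this (my own sketch, not the accepted answer; same (month, year) return order as above):

```python
from typing import Tuple

def add_month_divmod(year: int, month: int, offset: int) -> Tuple[int, int]:
    # work in zero-based months-since-year-zero, then convert back
    total = year * 12 + (month - 1) + offset
    new_year, month0 = divmod(total, 12)
    return month0 + 1, new_year
```

Same behavior, no loops — `divmod` handles arbitrarily large positive or negative offsets in one step.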
    • Got a version of the prediction system running on QA. Next week I start something new


Phil 1.21.19

9:00 – 3:30 ASRC NASA

woodgrain

Starting the day off right

    • Less than you think: Prevalence and predictors of fake news dissemination on Facebook
      • So-called “fake news” has renewed concerns about the prevalence and effects of misinformation in political campaigns. Given the potential for widespread dissemination of this material, we examine the individual-level characteristics associated with sharing false articles during the 2016 U.S. presidential campaign. To do so, we uniquely link an original survey with respondents’ sharing activity as recorded in Facebook profile data. First and foremost, we find that sharing this content was a relatively rare activity. Conservatives were more likely to share articles from fake news domains, which in 2016 were largely pro-Trump in orientation, than liberals or moderates. We also find a strong age effect, which persists after controlling for partisanship and ideology: On average, users over 65 shared nearly seven times as many articles from fake news domains as the youngest age group.
  • Working with Aaron on plumbing up the analytic pieces
  • Getting interpolation to work. Done! Kind of tricky: I had to iterate in reverse over the range so that I didn’t step on my indexes. By the way, this is how Python code looks if you’re not a Python programmer:
def interpolate(self):
    num_entries = len(self.data)
    # we step backwards so that inserts don't mess up our indexing
    for i in reversed(range(0, num_entries - 1)):
        current = self.data[i]
        next = self.data[i + 1]
        next_month = current.fiscal_month + 1
        if next_month > 12:
            next_month = 1
        if next_month != next.fiscal_month: # NOTE: This will not work if there is exactly one year between disbursements
            # we need to make some entries and insert them in the list
            target_month = next.fiscal_month
            if next.fiscal_month < current.fiscal_month:
                target_month += 12
            #print("interpolating between current month {} and target month {} / next fiscal {}".format(current.fiscal_month, target_month, next.fiscal_month))
            for fm in reversed(range(current.fiscal_month+1, target_month)):
                new_entry = PredictionEntry(current.get_creation_query_result())
                new_entry.override_dates(current.fiscal_year, fm)
                self.data.insert(i+1, new_entry)
                #print("\tgenerateing fiscal_month {}".format(fm))
  • So this:
tuple = 70/1042/402 contract expires: 2018-12-30 23:59:59
fiscalmonth = 9, fiscalyear = 2017, value = 85000.0, balance = 85000.0
fiscalmonth = 12, fiscalyear = 2017, value = -11041.23, balance = 73958.77
fiscalmonth = 1, fiscalyear = 2018, value = 0.0, balance = 73958.77
fiscalmonth = 2, fiscalyear = 2018, value = -28839.7, balance = 45119.07
fiscalmonth = 3, fiscalyear = 2018, value = 171490.55, balance = 216609.62
fiscalmonth = 4, fiscalyear = 2018, value = -14539.61, balance = 202070.01
fiscalmonth = 5, fiscalyear = 2018, value = -15608.09, balance = 186461.92
fiscalmonth = 9, fiscalyear = 2018, value = -60967.36, balance = 125494.56
fiscalmonth = 10, fiscalyear = 2018, value = -14211.78, balance = 111282.78
fiscalmonth = 1, fiscalyear = 2019, value = -23942.68, balance = 87340.1
fiscalmonth = 2, fiscalyear = 2019, value = -35380.81, balance = 51959.29
  • Gets expanded to this
tuple = 70/1042/402 contract expires: 2018-12-30 23:59:59
fiscalmonth = 9, fiscalyear = 2017, value = 85000.0, balance = 85000.0
fiscalmonth = 10, fiscalyear = 2017, value = 85000.0, balance = 85000.0
fiscalmonth = 11, fiscalyear = 2017, value = 85000.0, balance = 85000.0
fiscalmonth = 12, fiscalyear = 2017, value = -11041.23, balance = 73958.77
fiscalmonth = 1, fiscalyear = 2018, value = 0.0, balance = 73958.77
fiscalmonth = 2, fiscalyear = 2018, value = -28839.7, balance = 45119.07
fiscalmonth = 3, fiscalyear = 2018, value = 171490.55, balance = 216609.62
fiscalmonth = 4, fiscalyear = 2018, value = -14539.61, balance = 202070.01
fiscalmonth = 5, fiscalyear = 2018, value = -15608.09, balance = 186461.92
fiscalmonth = 6, fiscalyear = 2018, value = -15608.09, balance = 186461.92
fiscalmonth = 7, fiscalyear = 2018, value = -15608.09, balance = 186461.92
fiscalmonth = 8, fiscalyear = 2018, value = -15608.09, balance = 186461.92
fiscalmonth = 9, fiscalyear = 2018, value = -60967.36, balance = 125494.56
fiscalmonth = 10, fiscalyear = 2018, value = -14211.78, balance = 111282.78
fiscalmonth = 11, fiscalyear = 2018, value = -14211.78, balance = 111282.78
fiscalmonth = 12, fiscalyear = 2018, value = -14211.78, balance = 111282.78
fiscalmonth = 1, fiscalyear = 2019, value = -23942.68, balance = 87340.1
fiscalmonth = 2, fiscalyear = 2019, value = -35380.81, balance = 51959.29
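
  • The iterate-in-reverse trick can be seen in miniature with plain integers (hypothetical stand-ins for the fiscal months):

```python
# inserting while walking forward shifts the later indexes under you;
# walking backward means every insert lands past the positions still to visit
data = [1, 3, 7]
for i in reversed(range(len(data) - 1)):
    # fill the gap between each consecutive pair, also in reverse
    for v in reversed(range(data[i] + 1, data[i + 1])):
        data.insert(i + 1, v)
```

After the loop, `data` holds the fully interpolated sequence with no index bookkeeping.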
  • Next steps
    • Get the historical data to Aaron’s code
    • Get the predictions and intervals back
    • Store the raw data
    • update and insert the lineitems


Phil 1.20.19

I’m thinking that the RPG belief place/space will appear more trustworthy if they associate the spaces with the places (rooms first, then discussion), as opposed to building the maps by emphasising the problem-solving discussion. I think this will make sense, as everyone will share the room part of the story, while it’s far less likely that everyone will have the same experience of the discussion within the room. The starting point, shared by all, is the “you are in a room with a troll sleeping next to a chest”. Everything else is a path leading from that starting point.

Created a Google Forms Informed Consent: tinyurl.com/antibubbles-consent

The Einstein summation convention is the ultimate generalization of products such as matrix multiplication to multiple dimensions. It offers a compact and elegant way of specifying almost any product of scalars/vectors/matrices/tensors. Despite its generality, it can reduce the number of errors made by computer scientists and reduce the time they spend reasoning about linear algebra. It does so by being simultaneously clearer, more explicit, more self-documenting, more declarative in style, and less cognitively burdensome to use.
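
As a concrete sketch: matrix multiplication is the contraction `np.einsum('ij,jk->ik', A, B)`. Spelled out with plain loops (no NumPy needed), the convention just says “sum over the repeated index j”:

```python
def einsum_ij_jk_ik(a, b):
    """C[i][k] = sum over j of A[i][j] * B[j][k] -- the 'ij,jk->ik' contraction."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][j] * b[j][k] for j in range(inner))
             for k in range(cols)]
            for i in range(rows)]

m = [[1.0, 2.0], [3.0, 4.0]]
identity = [[1.0, 0.0], [0.0, 1.0]]
```

The index string does the documenting: free indices (i, k) survive into the output, the repeated index (j) is summed away.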

Quantifying echo chamber effects in information spreading over political communication networks

  • Echo chambers in online social networks, in which users prefer to interact only with ideologically-aligned peers, are believed to facilitate misinformation spreading and contribute to radicalize political discourse. In this paper, we gauge the effects of echo chambers in information spreading phenomena over political communication networks. Mining 12 million Twitter messages, we reconstruct a network in which users interchange opinions related to the impeachment of former Brazilian President Dilma Rousseff. We define a continuous polarization parameter that allows to quantify the presence of echo chambers, reflected in two communities of similar size with opposite views of the impeachment process. By means of simple spreading models, we show that the capability of users in propagating the content they produce, measured by the associated spreadability, strongly depends on their polarization. Users expressing pro-impeachment sentiments are capable to transmit information, on average, to a larger audience than users expressing anti-impeachment sentiments. Furthermore, the users’ spreadability is strictly correlated to the diversity, in terms of political polarization, of the audience reached. Our findings demonstrate that political polarization can hinder the diffusion of information over online social media, and shed light upon the mechanisms allowing to break echo chambers.

Can Machines Learn to Detect Fake News? A Survey Focused on Social Media

  • Through a systematic literature review method, in this work we searched classical electronic libraries in order to find the most recent papers related to fake news detection on social media. Our target is mapping the state of the art of fake news detection, defining fake news and finding the most useful machine learning technique for doing so. We concluded that the most used method for automatic fake news detection is not just one classical machine learning technique, but instead an amalgamation of classic techniques coordinated by a neural network. We also identified a need for a domain ontology that would unify the different terminology and definitions of the fake news domain. This lack of consensual information may mislead opinions and conclusions.

Network generation and evolution based on spatial and opinion dynamics components

  • In this paper, a model for a spatial network evolution based on a Metropolis simulation is presented. The model uses an energy function that depends both on the distance between the nodes and the stated preferences. The agents influence their network neighbors’ opinions using the CODA model. That means each agent has a preference between two options based on its probabilistic assessment of which option is the best one. The algorithm generates realistic networks for opinion problems as well as temporal dynamics for those networks. The transition from a random state to an ordered situation, as temperature decreases, is described. Different types of networks appear based on the relative strength of the spatial and opinion components of the energy.

Phil 1.15.19

7:00 – 3:00 ASRC NASA

  • Cool antibubbles thing: artboard 1
  • Also, I looked into a Slack version of Antibubbles. You can download conversations as JSON files, and I’d need to build (or find) a dice bot.
  • Fake News, Real Money: Ad Tech Platforms, Profit-Driven Hoaxes, and the Business of Journalism
    • Following the viral spread of hoax political news in the lead-up to the 2016 US presidential election, it’s been reported that at least some of the individuals publishing these stories made substantial sums of money—tens of thousands of US dollars—from their efforts. Whether or not such hoax stories are ultimately revealed to have had a persuasive impact on the electorate, they raise important normative questions about the underlying media infrastructures and industries—ad tech firms, programmatic advertising exchanges, etc.—that apparently created a lucrative incentive structure for “fake news” publishers. Legitimate ad-supported news organizations rely on the same infrastructure and industries for their livelihood. Thus, as traditional advertising subsidies for news have begun to collapse in the era of online advertising, it’s important to understand how attempts to deal with for-profit hoaxes might simultaneously impact legitimate news organizations. Through 20 interviews with stakeholders in online advertising, this study looks at how the programmatic advertising industry understands “fake news,” how it conceptualizes and grapples with the use of its tools by hoax publishers to generate revenue, and how its approach to the issue may ultimately contribute to reshaping the financial underpinnings of the digital journalism industry that depends on the same economic infrastructure.
  • The structured backbone of temporal social ties
    • In many data sets, information on the structure and temporality of a system coexists with noise and non-essential elements. In networked systems for instance, some edges might be non-essential or exist only by chance. Filtering them out and extracting a set of relevant connections is a non-trivial task. Moreover, methods put forward until now do not deal with time-resolved network data, which have become increasingly available. Here we develop a method for filtering temporal network data, by defining an adequate temporal null model that allows us to identify pairs of nodes having more interactions than expected given their activities: the significant ties. Moreover, our method can assign a significance to complex structures such as triads of simultaneous interactions, an impossible task for methods based on static representations. Our results hint at ways to represent temporal networks for use in data-driven models.
  • Brandon Rohrer: Data Science and Robots
  • Physical appt?
  • Working on getting the histories calculated and built
    • Best contracts are: contract 4 = 6, contract 5 = 9,  contract 12 = 10, contract 18 = 140
    • Lots of discussion on how exactly to do this. I think at this point I’m waiting on Heath to pull some new data that I can then export to Excel and play with to see the best way of doing things

Phil 1.14.19

7:00 – 5:00 ASRC NASA

  • Artificial Intelligence in the Age of Neural Networks and Brain Computing
    • Artificial Intelligence in the Age of Neural Networks and Brain Computing demonstrates that existing disruptive implications and applications of AI are a development of the unique attributes of neural networks, mainly machine learning, distributed architectures, massive parallel processing, black-box inference, intrinsic nonlinearity and smart autonomous search engines. The book covers the major basic ideas of brain-like computing behind AI, provides a framework to deep learning, and launches novel and intriguing paradigms as future alternatives.
  • Sent Aaron Mannes the iConference and SASO papers
  • Work on text analytics
    • Extract data by groups, group, user and start looking at cross-correlations
      • Continued modifying post_analyzer.py
      • Commenting out TF-IDF and coherence for a while?
  • Registered for iConference
  • Renew passport!
  • Current thinking on the schema. db_diagram
  • Making progress on the python to write lineitems and prediction history entries
  • Meeting with Don
    • Got most of the paperwork in line and then went over the proposal. I need to make changes to the text based on Don’s suggestions

Phil 1.5.19

It seems to me that this might also be important for validating machine learning models. Getting a critical level for false classification might really help

  • The quest for an optimal alpha
    • Researchers who analyze data within the framework of null hypothesis significance testing must choose a critical “alpha” level, α, to use as a cutoff for deciding whether a given set of data demonstrates the presence of a particular effect. In most fields, α = 0.05 has traditionally been used as the standard cutoff. Many researchers have recently argued for a change to a more stringent evidence cutoff such as α = 0.01, 0.005, or 0.001, noting that this change would tend to reduce the rate of false positives, which are of growing concern in many research areas. Other researchers oppose this proposed change, however, because it would correspondingly tend to increase the rate of false negatives. We show how a simple statistical model can be used to explore the quantitative tradeoff between reducing false positives and increasing false negatives. In particular, the model shows how the optimal α level depends on numerous characteristics of the research area, and it reveals that although α = 0.05 would indeed be approximately the optimal value in some realistic situations, the optimal α could actually be substantially larger or smaller in other situations. The importance of the model lies in making it clear what characteristics of the research area have to be specified to make a principled argument for using one α level rather than another, and the model thereby provides a blueprint for researchers seeking to justify a particular α level.
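
The tradeoff can be sketched numerically. This is a hedged toy model, not the paper’s actual model: a one-sided z-test with my own equal weighting of false positives and false negatives, using only `statistics.NormalDist` from the standard library:

```python
from statistics import NormalDist

def expected_error_rate(alpha, effect, n, p_h1=0.5):
    """Expected total error rate for a one-sided z-test:
    false positives when H0 is true plus misses when H1 is true."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)                  # rejection threshold for this alpha
    power = 1 - z.cdf(z_crit - effect * n ** 0.5)  # P(reject | H1 true)
    return (1 - p_h1) * alpha + p_h1 * (1 - power)
```

With a modest effect and small n, a looser alpha minimizes total error; with a huge, easily-detected effect, a stricter alpha wins — which mirrors the abstract’s point that the optimal α depends on the characteristics of the research area.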

Working more on A guided tour through a dirt-simple “deep” neural network

jaybookman

femexplore

Phil 1.2.19

Gotta get used to writing dates

7:00 – 5:00 ASRC PhD NASA

  • Continuing my deep neural network writeup
  • Continuing Holt-Winters work with Aaron
  • Continuing to read Clockwork Muse
    • Martindale spends the book talking about poetry, but I’m listening to Kind of Blue right now and I realize that Jazz is similar. The thought leaders are in some state where they are paying attention to each other and not much else. That’s how we get a trajectory that leads to Bitches Brew.
    • I think this is probably a generally applicable pattern. The thing that I need to think through is how a small group of highly creative people in what could be described as an echo-ish chamber differs from mass activity in a large attractor like authoritarianism.
    • We want to know how many dimensions are needed to account for the similarities among poets. Fortunately, once we have correlated all of the poets with one another, a procedure called multidimensional scaling will tell us just this.* Multidimensional scaling tells us that the twenty-one French poets differ along three main dimensions. These three dimensions account for 94 percent of the similarity matrix. (Page 114)
  • I have spent the day doing PHENOMENALLY STUPID MATH (Holt exponential smoothing with drecksnest damping). Excel file: ExponentialSmoothing2

Phil 12.31.18

7:00 – 4:30 ASRC NASA

  • Set up appt for physical
  • This is fabulous! Seeing Theory
    • Seeing Theory was created by Daniel Kunin while an undergraduate at Brown University. The goal of this website is to make statistics more accessible through interactive visualizations (designed using Mike Bostock’s JavaScript library D3.js).
  • Working on cleaning up and validating my Very Simple Perceptron class. I think I’m going to write the whole thing up as its own blog post
  • Why Trump Reigns as King Cyrus
    • This isn’t the religious right we thought we knew. The Christian nationalist movement today is authoritarian, paranoid and patriarchal at its core. They aren’t fighting a culture war. They’re making a direct attack on democracy itself.


Phil 12.29.18

Credibility in Online Social Networks: A Survey

  • The importance of information credibility in society cannot be underestimated given that it is at the heart of all decision-making. Generally, more information is better; however, knowing the value of this information is essential for decision-making processes. Information credibility defines a measure of the fitness of information for consumption. It can also be defined in terms of reliability, which denotes the probability that a data source will appear credible to the users. A challenge in this topic is that there is a great deal of literature that has developed different credibility dimensions. Additionally, information science dealing with online social networks has grown in complexity, attracting interest from researchers in information science, psychology, human-computer interaction, communication studies, and management studies, all of whom have studied the topic from different perspectives. This work will attempt to provide an overall review of the credibility assessment literature over the period 2006–2017 as applied to the context of the microblogging platform, Twitter. Known interpretations of credibility will be examined, particularly as they relate to the Twitter environment. In addition, we investigate levels of credibility assessment features. We then discuss recent works, addressing a new taxonomy of credibility analysis and assessment techniques. At last, a cross-referencing of literature is performed while suggesting new topics for future studies of credibility assessment in social media context.

Phil 12.21.18

7:00 – 4:30 ASRC PhD/NASA/NOAA

  • Spatial Representations in the Human Brain
    • While extensive research on the neurophysiology of spatial memory has been carried out in rodents, memory research in humans had traditionally focused on more abstract, language-based tasks. Recent studies have begun to address this gap using virtual navigation tasks in combination with electrophysiological recordings in humans. These studies suggest that the human medial temporal lobe (MTL) is equipped with a population of place and grid cells similar to that previously observed in the rodent brain. Furthermore, theta oscillations have been linked to spatial navigation and, more specifically, to the encoding and retrieval of spatial information. While some studies suggest a single navigational theta rhythm which is of lower frequency in humans than rodents, other studies advocate for the existence of two functionally distinct delta–theta frequency bands involved in both spatial and episodic memory. Despite the general consensus between rodent and human electrophysiology, behavioral work in humans does not unequivocally support the use of a metric Euclidean map for navigation. Formal models of navigational behavior, which specifically consider the spatial scale of the environment and complementary learning mechanisms, may help to better understand different navigational strategies and their neurophysiological mechanisms. Finally, the functional overlap of spatial and declarative memory in the MTL calls for a unified theory of MTL function. Such a theory will critically rely upon linking task-related phenomena at multiple temporal and spatial scales. Understanding how single cell responses relate to ongoing theta oscillations during both the encoding and retrieval of spatial and non-spatial associations appears to be key toward developing a more mechanistic understanding of memory processes in the MTL.
  • Three Kinds of Spatial Cognition
    • Nora S. Newcombe (Scholar)
    • Spatial cognition is often (but wrongly) conceptualized as a single domain of cognition. However, humans function in more than one way in the spatial world. We navigate, as do all mobile animals, but we also manipulate objects using distinctive hands with opposable thumbs, unlike other species. In fact, an important characteristic of human adaptation is the ability to invent tools. Of course, another central asset is human symbolic ability, which includes the ability to spatialize thought in abstractions such as maps, graphs, and analogies. Thus, there are at least three kinds of spatial cognition with three separable functions. Navigation involves moving around the environment to find food and shelter, and to avoid danger. It draws on several interconnected neural subsystems that track movement and encode the location of external entities with respect to each other and the moving self (i.e., extrinsic coding), and it integrates these inputs to achieve best‐possible estimates. Human navigation is characterized by a great deal of individual variation. Tool use and invention involves the mental representation and transformation of the shapes of objects (i.e., intrinsic coding). It relies on substantially different neural subsystems than navigation. Like navigation, it shows marked individual differences, which are related to variations in learning in science, technology, engineering, and mathematics (STEM). Spatialization is an aspect of human symbolic skill that cuts across multiple cognitive domains and involves many kinds of spatial symbol systems, including language, metaphor, analogy, gesture, sketches, diagrams, graphs, maps, and mental images. These spatial symbol systems are vital to many kinds of learning, including in STEM. Future research on human spatial cognition needs to further delineate the origins, development, neural substrates, variability, and malleability of navigation, tool use, and abstract spatial thinking, as well as their interconnections to each other and to other cognitive skills.
  • Added a bit to my Normal Accidents notes
  • Working on saving out to history and item table
  • Turns out that if you want to retrieve floats (rather than Decimal objects) from a postgres NUMERIC column using psycopg2, you have to register a custom type handler:
    # Map postgres DECIMAL/NUMERIC values to Python floats (None for NULL).
    # Calling register_type() with no second argument applies the cast globally.
    DEC2FLOAT = psycopg2.extensions.new_type(
        psycopg2.extensions.DECIMAL.values,
        'DEC2FLOAT',
        lambda value, curs: float(value) if value is not None else None)
    psycopg2.extensions.register_type(DEC2FLOAT)
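The conversion lambda can be sanity-checked on its own, without a database. This stand-alone snippet mirrors the converter's logic; psycopg2 calls it with the raw string value and the cursor, and the cursor argument is unused here, so None stands in for it:

```python
# Stand-alone check of the DEC2FLOAT conversion logic used above.
dec2float = lambda value, curs: float(value) if value is not None else None

print(dec2float("3.14", None))   # a Python float, not a Decimal
print(dec2float(None, None))     # NULL columns stay None
```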
  • Learned about dollar-quoting as per https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING
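As a small illustration of why dollar-quoting is handy when building SQL strings in Python (the sample text is the one from the postgres docs): embedded single quotes need no doubling inside a `$$…$$` literal.

```python
# A standard SQL string literal requires doubling embedded single quotes;
# a dollar-quoted literal (PostgreSQL-specific) takes the text verbatim.
sql_standard = "SELECT 'Dianne''s horse'"
sql_dollar = "SELECT $$Dianne's horse$$"

print("''" in sql_standard)  # True  - escaped apostrophe
print("''" in sql_dollar)    # False - no escaping needed
```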
  • To make big inserts with psycopg2 actually persist, either set autocommit = True or call conn.commit() after executing; otherwise the statements sit in an open transaction and are rolled back when the connection closes
    self.conn = psycopg2.connect(config_str)
    self.conn.autocommit = True  # every execute() now commits immediately
  • And you should wrap execution in try/except (catching psycopg2.Error rather than everything, so real bugs still surface)
    def query_no_result(self, sql: str) -> bool:
        try:
            self.cursor.execute(sql)
            return True
        except psycopg2.Error as e:
            print("unable to execute: {} ({})".format(sql, e))
            return False


Phil 12.20.18

7:00 – 4:00 ASRC NASA/PhD

  • Goal-directed navigation based on path integration and decoding of grid cells in an artificial neural network
    • As neuroscience gradually uncovers how the brain represents and computes with high-level spatial information, the endeavor of constructing biologically-inspired robot controllers using these spatial representations has become viable. Grid cells are particularly interesting in this regard, as they are thought to provide a general coordinate system of space. Artificial neural network models of grid cells show the ability to perform path integration, but important for a robot is also the ability to calculate the direction from the current location, as indicated by the path integrator, to a remembered goal. This paper presents a neural system that integrates networks of path integrating grid cells with a grid cell decoding mechanism. The decoding mechanism detects differences between multi-scale grid cell representations of the present location and the goal, in order to calculate a goal-direction signal for the robot. The model successfully guides a simulated agent to its goal, showing promise for implementing the system on a real robot in the future.
  • Path integration and the neural basis of the ‘cognitive map’
    • Accumulating evidence indicates that the foundation of mammalian spatial orientation and learning is based on an internal network that can keep track of relative position and orientation (from an arbitrary starting point) on the basis of integration of self-motion cues derived from locomotion, vestibular activation and optic flow (path integration).
    • Place cells in the hippocampal formation exhibit elevated activity at discrete spots in a given environment, and this spatial representation is determined primarily on the basis of which cells were active at the starting point and how far and in what direction the animal has moved since then. Environmental features become associatively bound to this intrinsic spatial framework and can serve to correct for cumulative error in the path integration process.
    • Theoretical studies suggested that a path integration system could involve cooperative interactions (attractor dynamics) among a population of place coding neurons, the synaptic coupling of which defines a two-dimensional attractor map. These cells would communicate with an additional group of neurons, the activity of which depends on the conjunction of movement speed, location and orientation (head direction) information, allowing position on the attractor map to be updated by self-motion information.
    • The attractor map hypothesis contains an inherent boundary problem: what happens when the animal’s movements carry it beyond the boundary of the map? One solution to this problem is to make the boundaries of the map periodic by coupling neurons at each edge to those on the opposite edge, resulting in a toroidal synaptic matrix. This solution predicts that, in a sufficiently large space, place cells would exhibit a regularly spaced grid of place fields, something that has never been observed in the hippocampus proper.
    • Recent discoveries in layer II of the medial entorhinal cortex (MEC), the main source of hippocampal afferents, indicate that these cells do have regularly spaced place fields (grid cells). In addition, cells in the deeper layers of this structure exhibit grid fields that are conjunctive for head orientation and movement speed. Pure head direction neurons are also found there. Therefore, all of the components of previous theoretical models for path integration appear in the MEC, suggesting that this network is the core of the path integration system.
    • The scale of MEC spatial firing grids increases systematically from the dorsal to the ventral poles of this structure, in much the same way as is observed for hippocampal place cells, and we show how non-periodic hippocampal place fields could arise from the combination of inputs from entorhinal grid cells, if the inputs cover a range of spatial scales rather than a single scale. This phenomenon, in the spatial domain, is analogous to the low frequency ‘beats’ heard when two pure tones of slightly different frequencies are combined.
    • The problem of how a two-dimensional synaptic matrix with periodic boundary conditions, postulated to underlie grid cell behaviour, could be self-organized in early development is addressed. Based on principles derived from Alan Turing’s theory of spontaneous symmetry breaking in chemical systems, we suggest that topographically organized, grid-like patterns of neural activity might be present in the immature cortex, and that these activity patterns guide the development of the proposed periodic synaptic matrix through a mechanism involving competitive synaptic plasticity.
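The 'beats' analogy two points up can be made explicit with the standard sum-to-product identity: combining two unit-amplitude oscillations of nearby frequencies gives

```latex
\sin(2\pi f_1 t) + \sin(2\pi f_2 t)
  = 2\cos\!\big(\pi (f_1 - f_2)\, t\big)\,\sin\!\big(\pi (f_1 + f_2)\, t\big)
```

i.e. a carrier at the mean frequency modulated by a slow envelope at the difference frequency |f1 − f2|. Analogously, summing grid-cell inputs of nearby spatial scales yields a response whose large-scale envelope repeats far more slowly than any individual grid, approximating a non-periodic place field.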
  • Wormholes in virtual space: From cognitive maps to cognitive graphs
    • Cognitive maps are thought to have a metric Euclidean geometry.
    • Participants learned a non-Euclidean virtual environment with two ‘wormholes’.
    • Shortcuts reveal that spatial knowledge violates metric geometry.
    • Participants were completely unaware of the wormholes and geometric inconsistencies.
    • Results contradict a metric Euclidean map, but support a labelled ‘cognitive graph’.
  • Back to TimeSeriesML
    • Encryption class – done
      • Create a key and save it to file
      • Read a key in from a file into global variable
      • Encrypt a string if there is a key
      • Decrypt a string if there is a key
    • Postgres class – reading part is done
      • Open a global connection and cursor based on a config string
      • Run queries and return success
      • Fetch results of queries as lists of JSON objects
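The "lists of JSON objects" step can be sketched independently of the database: zip the column names from psycopg2's cursor.description with each fetched row tuple (the helper name below is hypothetical, not the actual Postgres class method):

```python
def rows_to_dicts(col_names, rows):
    """Turn raw cursor rows plus their column names into JSON-ready dicts.

    With psycopg2 you would pass [d[0] for d in cursor.description] as
    col_names and cursor.fetchall() as rows; json.dumps() can then
    serialize the result.
    """
    return [dict(zip(col_names, row)) for row in rows]

print(rows_to_dicts(["id", "name"], [(1, "alpha"), (2, "beta")]))
# [{'id': 1, 'name': 'alpha'}, {'id': 2, 'name': 'beta'}]
```

psycopg2's extras module also offers RealDictCursor, which returns dict rows directly and makes this helper unnecessary.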