Category Archives: Development

Phil 1.19.18

7:00 – 5:00 ASRC

  • Look! Adversarial Herding: https://twitter.com/katestarbird/status/954802718018686976
  • Reconnected with Wayne. Arranging a time to meet the week of the 29th. Sent him a copy of the winter sim conference paper
  • Continuing with Beyond Individual Choice. Actually, I wound up adding a section to my thoughts on trust and awareness about how attention and awareness interplay, and how high social trust makes for a much more efficient way to approach games such as the prisoner’s dilemma
  • Starting Angular course
    • Architecture overview
  • Meeting with Jeremy, Heath and Aaron on Project structure/setup
  • More Angular. Yarn requires Python 2.x, which I hope doesn’t break my Python 3.x
  • Could not get the project to serve once built
  • Adversarial herding via The Opposition
    • Clint Watts: Clint is a consultant and researcher modeling and forecasting threat actor behavior and developing countermeasures for disrupting and defeating state and non-state actors. As a consultant, Clint designs and implements customized training and research programs for military, intelligence and law enforcement organizations at the federal, state and local level. In the private sector, he helps financial institutions develop best practices in cybersecurity intelligence operations. His research predominantly focuses on terrorism forecasting and trends, seeking to anticipate emerging extremist hotspots and appropriate counterterrorism responses. More recently, Clint used modeling to outline Russian influence operations via social media and the Kremlin’s return to Active Measures.

Phil 1.18.2018

7:30 – 4:30 ASRC MKT

  • Truth Decay (RAND corporation ebook)
    • An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life
  • Reading more Beyond Individual Choice
    • TheoryDemands
  • Got my Angular setup running. Thanks, Jeremy!
  • Reading up on WSO2 IaaS – Done. Did not know that was a thing.
  • Helped Aaron a bit with his dev box horror show
  • Spent a good chunk of the afternoon jumping through hoops to get an online Angular course approved. It seems you get approval, send it to HR(?), buy it yourself, then submit the expense through Concur. That’s totally efficient…

Phil 1.17.18

7:00 – 3:30 ASRC MKT

  • Harbinger, another DiscussionGame comparable: We are investigating how people make predictions and how to improve forecasting of current events.
  • Working over time, constructing a project based on beliefs and ideas, can be regarded as working with a group of yourself. You communicate with your future self through construction. You perceive your past self through artifacts. Polarization should happen here as a matter of course, since the social similarity (and therefore influence) is very high.
  • Back to Beyond Individual Choice
    • Diagonals
    • Salience
  • Back to Angular – prepping for integration of PolarizationGame into the A2P platform. Speaking of which, there needs to be a REST API that will support registered, (optionally?) identified bots. A bot that is able to persuade a group of people over time to reach a unanimous vote would be an interesting Turing-style test. And a prize
    • Got Tour of Heroes running again, though it seems broken…
  • Nice chat with Jeremy.
    • He’ll talk to Heath about what it would take to set up an A2P instance for the discussion system that could scale to millions of players
    • Also mentioned that there would need to be a REST interface for bots
    • Look through Material Design
      • Don’t see any direct Forum (threaded discussion) details on the home site, but I found this Forum example GIF
    • Add meeting with Heath and Jeremy early in the sprint to lay out initial detailed design
    • Stub out non-functional pages as a deliverable for this (next?) sprint
    • He sent me an email with all the things to set up. Got the new Node, Yarn and CLI on my home machine. Will do that again tomorrow and test the VPN connections
  • Sprint planning
    • A2P GUI and Detailed Design are going to overlap

Phil 1.9.18

7:00 – 4:00 ASRC MKT

  • Submit DC paper – done
  • Add primary goal and secondary goals
  • Add group decision making tool to secondary goals
  • Add site search to “standard” websearch – done
  • Visual Analytics to Support Evidence-Based Decision Making (dissertation)
  • Can Public Diplomacy Survive the internet? Bots, Echo chambers, and Disinformation
    • Shawn Powers serves as the Executive Director of the United States Advisory Commission on Public Diplomacy
    • Markos Kounalakis, Ph.D. is a visiting fellow at the Hoover Institution at Stanford University and is a presidentially appointed member of the J. William Fulbright Foreign Scholarship Board.  Kounalakis is a senior fellow at the Center for Media, Data and Society at Central European University in Budapest, Hungary and president and publisher emeritus of the Washington Monthly. He is currently researching a book on the geopolitics of global news networks.
  • Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election (Harvard)
    • Rob Faris
    • Hal Roberts
    • Bruce Etling
    • Nikki Bourassa 
    • Ethan Zuckerman
    • Yochai Benkler
    • We find that the structure and composition of media on the right and left are quite different. The leading media on the right and left are rooted in different traditions and journalistic practices. On the conservative side, more attention was paid to pro-Trump, highly partisan media outlets. On the liberal side, by contrast, the center of gravity was made up largely of long-standing media organizations steeped in the traditions and practices of objective journalism.

      Our data supports lines of research on polarization in American politics that focus on the asymmetric patterns between the left and the right, rather than studies that see polarization as a general historical phenomenon, driven by technology or other mechanisms that apply across the partisan divide.

      The analysis includes the evaluation and mapping of the media landscape from several perspectives and is based on large-scale data collection of media stories published on the web and shared on Twitter.

Lessons in ML Optimization

One of the “fun” parts of working in ML for someone with a background in software development rather than academic research is that lots of hard problems remain unsolved. There are rarely defined ways things “must” be done, or in some cases even rules of thumb, for something like implementing a production-capable machine learning system for a specific real-world problem.

For most areas of software engineering, by the time a technology is mature enough for enterprise deployment, it has long since gone through the fire and the flame of academic support, Fortune 50 R&D, and broad ground-level acceptance in the development community. It didn’t take long for distributed computing with Hadoop to be standardized, for example. Web security, index systems for search, relational abstraction tiers, even that most volatile of production-tier technologies, the JavaScript GUI framework, all go through periods of acceptance and conformity before most large organizations try to roll them out. It all makes sense if you consider the cost of migrating your company from a legacy Struts/EJB3.0 app running on Oracle to the latest HTML5 framework with a Hadoop backend. You don’t want to spend months (or years) investing in a major rewrite only to find that it’s entirely out of date by your release. Organizations looking at these kinds of updates want an expectation of longevity for their dollar, so they invest in mature technologies with clear design rules.

There are certainly companies that do not fall into this category: small companies that are agile enough to adopt a technology in the short term to retain relevance (or buzzword compliance), companies funded with external research dollars, or companies that invest money to stay on the bleeding edge. However, I think it’s fair to say the majority of industry and federal customers are looking for stability and cost efficiency from solved technical problems.

Machine Learning is in the odd position of being so tremendously useful in comparison to prior techniques that companies who would normally wait for the dust to settle, and for development and deployment of these capabilities to become fully commoditized, are dipping their toes in. I wrote in a previous post about how a lot of the problems with implementing existing ML algorithms boil down to lifecycle, versioning, deployment, security, etc., but there is another major factor: model optimization.

Any engineer on the planet can download a copy of Keras/TensorFlow and a CSV of their organization’s data and smoosh them together until a number comes out. The problem comes when the number takes an eternity to output and is wrong. In addition to understanding the math that allows things like SGD to work for backpropagation, or why certain activation functions are more effective in certain situations, one of the jobs for data scientists tuning DNN models is to figure out how to optimize the various buttons and knobs in the model to make it as accurate and performant as possible. Because a lot of this work *isn’t* a commodity yet, it’s a painful learning process of tweaking the data sets, adjusting model design or parameters, and rerunning and comparing the results to try to find optimal answers without overfitting. Ironically, the task these data scientists are doing is one perfectly suited to machine learning. It’s no surprise to me that Google developed AutoML to optimize their own NN development.
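At its core, that tweak-and-rerun loop is just a search over a parameter space. A toy sketch of the idea (here `train_and_score` is a hypothetical stand-in for a real training run, and the toy “loss surface” peaks at lr=0.01 with 2 layers):

```python
from itertools import product

def train_and_score(params):
    # Hypothetical stand-in for "train the model, return validation score".
    # This toy scoring surface peaks at lr=0.01 with 2 layers.
    return 1.0 - abs(params["lr"] - 0.01) * 10 - abs(params["layers"] - 2) * 0.05

def grid_search(space):
    # Exhaustively try every combination and keep the best-scoring one.
    keys = list(space)
    best_params, best_score = None, float("-inf")
    for combo in product(*(space[k] for k in keys)):
        params = dict(zip(keys, combo))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {"lr": [0.1, 0.01, 0.001], "layers": [1, 2, 4], "dropout": [0.0, 0.2, 0.5]}
best, score = grid_search(space)
```

A real search would swap the grid for random search or a Bayesian optimizer once the space gets large, since the number of combinations explodes combinatorially.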


A number of months ago Phil and I worked on an unsupervised learning task related to organizing high-dimensional agents in a medical space. These entities were complex “polychronic” patients with a wide variety of diagnoses and illnesses. Combining fields for patient demographic data with their full medical claim history, we came up with a method to group medically similar patients and look for statistical outliers as indicators of fraud, waste, and abuse. The results were extremely successful and recovered a lot of money for the customer, but the interesting thing technically was how the solution evolved. Our first prototype used a wide variety of clustering algorithms, value decompositions, non-negative matrix factorization, etc., looking for optimal results. All of the selections and subsequent hyperparameters had to be modified by hand, the results evaluated, and further adjustments made.

When it became clear that the results were very sensitive to tiny adjustments, it was obvious that our manual tinkering would miss obvious gradient changes, so we implemented an optimizer framework that could evaluate manifold learning techniques for stability and reconstruction error, then cluster the results of the reduction using either a complete fitness-landscape walk, a genetic algorithm, or a sub-surface division.
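The reconstruction-error half of that loop is straightforward to sketch with a plain SVD. This toy version (not our actual framework; names and data are made up) just picks the smallest rank whose relative reconstruction error clears a threshold:

```python
import numpy as np

def reconstruction_error(X, k):
    # Relative error of reconstructing X from its top-k principal components.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k reconstruction
    return np.linalg.norm(Xc - Xk) / np.linalg.norm(Xc)

def pick_dimension(X, max_error=0.1):
    # Smallest k whose relative reconstruction error is under the threshold.
    for k in range(1, X.shape[1] + 1):
        if reconstruction_error(X, k) <= max_error:
            return k
    return X.shape[1]

# Toy data: 200 "patients" whose 10 features really live in a 3-D latent space.
rng = np.random.default_rng(0)
mixing = np.zeros((3, 10))
mixing[0, :4] = mixing[1, 4:7] = mixing[2, 7:] = 1.0
X = rng.normal(size=(200, 3)) @ mixing
k = pick_dimension(X)   # recovers 3
```

The real framework swept many more techniques and also scored stability across reruns, but the evaluate-and-compare skeleton is the same.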

While working on tuning my latest test LSTM for time series prediction, I realized we’re dealing with the same issue here. There is no hard and fast rule for questions like, “How many LSTM Layers should my RNN have?” or “How many LSTM Units should each layer have?”, “What loss function and optimizer work best for this type of data?”, “How much dropout should I apply?”, “Should I use peepholes?”

I kept finding articles during my work saying things like, “There are diminishing returns for more than 4 stacked LSTM layers.” That’s an interesting rule of thumb… what is it based on? Presumably the author’s intuition from the data sets for the particular problems they were working on. Some rules of thumb attempt to define a mathematical relationship between the size and complexity of the input data and the optimal layout of layers and units. This StackOverflow question has some great responses: https://stackoverflow.com/questions/35520587/how-to-determine-the-number-of-layers-and-nodes-of-a-neural-network
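For what it’s worth, one heuristic floating around that thread bounds the hidden-unit count by the amount of training data, roughly Nh = Ns / (α · (Ni + No)) with α somewhere between 2 and 10. Treat it as a starting point, not a law:

```python
def max_hidden_units(n_samples, n_in, n_out, alpha=5):
    """Rule-of-thumb ceiling on hidden units to limit overfitting:
    Nh = Ns / (alpha * (Ni + No)), alpha typically 2-10.
    A heuristic starting point, not a law."""
    return int(n_samples / (alpha * (n_in + n_out)))

# e.g. 10,000 training rows, 20 input features, 1 output:
max_hidden_units(10_000, 20, 1)   # -> 95
```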

A method recommended by Geoff Hinton is to add layers until you start to overfit your training set. Then you add dropout or another regularization method.

Because so much of what Phil and I do tends towards the generic repeatable solution for real world problems, I suspect we’ll start with some “common wisdom heuristics” and rapidly move towards writing a similar optimizer for supervised problems.

Phil 12.28.17

8:30 – 4:30 ASRC MKT

  • Still sick. Nearing bronchitis?
  • Confessions of a Digital Nazi Hunter
  • Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks
    • We present a novel application of LSTM recurrent neural networks to multi label classification of diagnoses given variable-length time series of clinical measurements. Our method outperforms a strong baseline on a variety of metrics.
    • Scholar Cited by
      • Mapping Patient Trajectories using Longitudinal Extraction and Deep Learning in the MIMIC-III Critical Care Database
        • Electronic Health Records (EHRs) contain a wealth of patient data useful to biomedical researchers. At present, both the extraction of data and methods for analyses are frequently designed to work with a single snapshot of a patient’s record. Health care providers often perform and record actions in small batches over time. By extracting these care events, a sequence can be formed providing a trajectory for a patient’s interactions with the health care system. These care events also offer a basic heuristic for the level of attention a patient receives from health care providers. We show that it is possible to learn meaningful embeddings from these care events using two deep learning techniques, unsupervised autoencoders and long short-term memory networks. We compare these methods to traditional machine learning methods which require a point in time snapshot to be extracted from an EHR.
  • Continuing on white paper
  • Moved the Flocking and Herding paper over to the WSC17 format for editing. Will need to move to the WSC18 format when that becomes available

Phil 12.27.17

8:00 – 4:00 ASRC MKT

  • Granted permission for the CHIIR18 DC.
  • Continuing on white paper. And we’ll see what Aaron has to say about the stampede paper today?
  • It occurs to me that it could make sense to read the trajectories in using the ARFF format. Looks straightforward, though I’d have to output each agent on an axis-by-axis basis. That would in turn mean that we’d have to save each ParticleStatement and write it out.
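A minimal sketch of what that axis-by-axis ARFF output could look like (function and data names here are hypothetical; the real exporter would pull from the saved ParticleStatements):

```python
def trajectories_to_arff(relation, agents):
    """agents: {agent_name: [(x, y), ...]} -- one NUMERIC attribute per
    agent per axis, one @DATA row per time step."""
    names = sorted(agents)
    lines = [f"@RELATION {relation}", ""]
    for n in names:
        for axis in ("x", "y"):
            lines.append(f"@ATTRIBUTE {n}_{axis} NUMERIC")
    lines += ["", "@DATA"]
    steps = len(next(iter(agents.values())))
    for t in range(steps):
        row = []
        for n in names:
            row += [str(agents[n][t][0]), str(agents[n][t][1])]
        lines.append(",".join(row))
    return "\n".join(lines)

arff = trajectories_to_arff("trajectories",
                            {"a1": [(0, 0), (1, 2)], "a2": [(3, 4), (5, 6)]})
```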
  • A new optimizer using particle swarm theory (1995)
    • The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed.
    • Cited by 12155

Phil 12.19.17

7:00 – 5:00 ASRC MKT

  • Trust, Identity Politics and the Media
    • Essential to a free and functioning democracy is an independent press, a crucial civil society actor that holds government to account and provides citizens access to the impartial information they need to make informed judgments, reason together, exercise their rights and responsibilities, and engage in collective action. In times of crisis, the media fulfills the vital role of alerting the public to danger and connecting citizens to rescue efforts, as Ushahidi has done in Kenya. Or, it can alert the international community to human rights abuses as does Raqqa is Being Slaughtered Silently. But, the very capabilities that allow the media to alert and inform, also allow it to sow division – as it did in Rwanda leading up to and during the genocide– by spreading untruths, and, through “dog whistles,” targeting ethnic groups and inciting violence against them. This panel will focus on two topics: the role of media as a vehicle for advancing or undermining social cohesion, and the use of media to innovate, organize and deepen understanding, enabling positive collective action.
      • Abdalaziz Alhamza, Co-Founder, Raqqa is Being Slaughtered Silently
      • Uzodinma Iweala, CEO and Editor-in-Chief, Ventures Africa; Author, Beasts of No Nation; Producer, Waiting for Hassana (moderator)
      • Ben Rattray, Founder and CEO, Change.org
      • Malika Saada Saar, Senior Counsel on Civil and Human Rights, Google
  • Continuing Consensus and Cooperation in Networked Multi-Agent Systems here. Done! Promoted to phlog.
  • An Agent-Based Model of Indirect Minority Influence on Social Change and Diversity
    • The present paper describes an agent-based model of indirect minority influence. It examines whether indirect minority influence can lead to social change as a function of cognitive rebalancing, a process whereby related attitudes are affected when one attitude is changed. An attitude updating algorithm was modelled with minimal assumptions drawing on social psychology theories of indirect minority influence. Results revealed that facing direct majority influence, indirect minority influence along with cognitive rebalancing is a recipe for social change. Furthermore, indirect minority influence promotes and maintains attitudinal diversity in local ingroups and throughout the society. We discuss the findings in terms of social influence theories and suggest promising avenues for model extensions for theory building in minority influence and social change.
  • Ok, time to switch gears and start on the flocking paper. And speaking of which, is this a venue?
    • Winter Simulation Conference 2017 – INFORMS Meetings Browser times out right now, so is it still valid?
    • Created a new LaTeX project, since this is a modification of the CHIIR paper, and started to slot pieces in. It is *hard* switching gears. Leaving it in the sigchi format for now.
    • I went to change out the echo chamber distance from average with heading from average (which looks way better), but everything was zero in the spreadsheet. After poking around a bit, I was “fixing” the angle cosine to lie on (-1, 1), by forcing it to be 1.0 all the time. Fixed. EchoChamberAngle
  • Sprint planning. I’m on the hook for writing up the mapping white paper and strawman design

Phil 12.18.17

7:15 – 4:15 ASRC MKT

  • I’m having old iPhone problems. Trying a wipe and restart.
  • Exploring the ChestXray14 dataset: problems
    • Interesting article on using tagged datasets. What if the tags are wrong? Something to add to the RB is a random re-introduction of a previously tagged item to see if tagging remains consistent.
  • Continuing Consensus and Cooperation in Networked Multi-Agent Systems here
  • Visualizing the Temporal Evolution of Dynamic Networks (ACM MLG 2011)
    • Many developments have recently been made in mining dynamic networks; however, effective visualization of dynamic networks remains a significant challenge. Dynamic networks are typically visualized via a sequence of static graph layouts. In addition to providing a visual representation of the network topology at each time step, the sequence should preserve the “mental map” between layouts of consecutive time steps to allow a human to interpret the temporal evolution of the network and gain valuable insights that are difficult to convey by summary statistics alone. We propose two regularized layout algorithms for visualizing dynamic networks, namely dynamic multidimensional scaling (DMDS) and dynamic graph Laplacian layout (DGLL). These algorithms discourage node positions from moving drastically between time steps and encourage nodes to be positioned near other members of their group. We apply the proposed algorithms on several data sets to illustrate the benefit of the regularizers for producing interpretable visualizations.
    • These look really straightforward to implement. May be handy in the new flocking paper
  • Opinion and community formation in coevolving networks (Phys Review E)
    • In human societies, opinion formation is mediated by social interactions, consequently taking place on a network of relationships and at the same time influencing the structure of the network and its evolution. To investigate this coevolution of opinions and social interaction structure, we develop a dynamic agent-based network model by taking into account short range interactions like discussions between individuals, long range interactions like a sense for overall mood modulated by the attitudes of individuals, and external field corresponding to outside influence. Moreover, individual biases can be naturally taken into account. In addition, the model includes the opinion-dependent link-rewiring scheme to describe network topology coevolution with a slower time scale than that of the opinion formation. With this model, comprehensive numerical simulations and mean field calculations have been carried out and they show the importance of the separation between fast and slow time scales resulting in the network to organize as well-connected small communities of agents with the same opinion.
  • I can build maps from trajectories of agents through a labeled belief space: mapFromTrajectories
    • This would be analogous to building a map based on terms or topics used by people during multiple group polarization discussion. Densely connected central area where all the discussions begin, sparse ‘outer region’ where the poles live. In this case, you can clearly see the underlying grid that was used to generate the ‘terms’
  • Progress for today. Size is the average time spent ‘over’ a topic/term. Brightness is the number of distinct visitors: mapFromTrajectories2
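The two statistics behind that plot can be sketched in a few lines (names and the toy trajectories are made up; this isn’t the actual map code):

```python
from collections import defaultdict

def map_from_trajectories(trajectories):
    """trajectories: {agent: [term, term, ...]} -- one term per tick.
    Returns {term: (avg_dwell, distinct_visitors)} where avg_dwell is the
    mean length of a consecutive run an agent spends 'over' the term
    (marker size) and distinct_visitors is the visitor count (brightness)."""
    dwells = defaultdict(list)      # term -> list of run lengths
    visitors = defaultdict(set)     # term -> set of agents that visited
    for agent, path in trajectories.items():
        run_term, run_len = None, 0
        for term in path + [None]:  # sentinel flushes the final run
            if term == run_term:
                run_len += 1
            else:
                if run_term is not None:
                    dwells[run_term].append(run_len)
                    visitors[run_term].add(agent)
                run_term, run_len = term, 1
    return {t: (sum(d) / len(d), len(visitors[t])) for t, d in dwells.items()}

stats = map_from_trajectories({"a": ["hub", "hub", "pole1"],
                               "b": ["hub", "pole2"]})
```

This matches the intuition from the polarization discussions: the densely visited central “hub” terms come out big and bright, the poles small and dim.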

Phil 12/15/17

9:00 – 1:30 ASRC MKT

  • Looong day yesterday
  • Sprint review
  • This looks like an interesting alternative to blockchain for document security: A Cryptocurrency Without a Blockchain Has Been Built to Outperform Bitcoin
    • The controversial currency IOTA rests on a mathematical “tangle” that its creators say will make it much faster and more efficient to run.
  • Also this: Can AI Win the War Against Fake News?
    • Developers are working on tools that can help spot suspect stories and call them out, but it may be the beginning of an automated arms race. 
    • Mentions adverifai.com
      • FakeRank is like PageRank for Fake News detection, only that instead of links between web pages, the network consists of facts and supporting evidence. It leverages knowledge from the Web with Deep Learning and Natural Language Processing techniques to understand the meaning of a news story and verify that it is supported by facts.

Phil 12.14.17

7:00 – 11:00 ASRC MKT

Phil 12.13.17

7:00 – 5:00 ASRC MKT

  • Schedule physical
  • Write up fire stampede. Done!
  • Continuing Consensus and Cooperation in Networked Multi-Agent Systems here
  • Would like to see how the credibility cues on the document were presented. What went right and what went wrong: Schumer calls cops after forged sex scandal charge
  • Finished linking the RB components to the use cases. Waiting on Aaron to finish SIGINT use case
  • Working on building maps from trajectories. Trying http://graphstream-project.org
    • Updating Labeled2DMatrix to read in string values. I had never finished that part! There are some issues with what to do about column headers. I think I’m going to add explicit headers for the ‘Trajectory’ sheet
  • Strategized with Aaron about how to approach the event tomorrow. And Deep Neural Network Capsules. And Social Gradient Descent Agents.
    • deep neural nets learn by back-propagation of errors over the entire network. In contrast real brains supposedly wire neurons by Hebbian principles: “units that fire together, wire together”. Capsules mimic Hebbian learning in the way that: “A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule”
      • Sure sounds like oscillator frequency locking / flocking to me……

Phil 12.12.17

7:00 – 3:30 ASRC MKT

  • Need to make sure that an amplified agent also has amplified influence in calculating velocity – Fixed
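The fix can be sketched as a weighted neighbor average, with hypothetical names (not the sim’s actual code): amplification simply becomes a larger weight, so an amplified agent pulls the consensus velocity harder.

```python
def consensus_velocity(neighbors):
    """neighbors: list of (velocity, weight) pairs. An amplified agent
    carries a larger weight, so it pulls the weighted average harder."""
    total_w = sum(w for _, w in neighbors)
    return sum(v * w for v, w in neighbors) / total_w

# An amplified agent (weight 3) among two normal ones (weight 1 each):
consensus_velocity([(1.0, 1), (1.0, 1), (4.0, 3)])   # -> 2.8
```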
  • Towards the end of this video is an interview with Ian Couzin talking about how mass communication is disrupting our ability to flock ‘correctly’ due to the decoupling of distance and information
  • Write up fire stampede. Backups everywhere, one hole, antennas burn so the AI keeps trust in A* but loses awareness as the antennas burn: “The Los Angeles Police Department asked drivers to avoid navigation apps, which are steering users onto more open routes — in this case, streets in the neighborhoods that are on fire.” [LA Times] Also this slow motion version of the same thing: For the Good of Society — and Traffic! — Delete Your Map App
  • First self-driving car ‘race’ ends in a crash at the Buenos Aires Formula E ePrix; two cars enter, one car survives
  • Taking a closer look at Oscillator Models and Collective Motion (178 Citations) and Consensus and Cooperation in Networked Multi-Agent Systems (6,291 Citations)
  • Consensus and Cooperation in Networked Multi-Agent Systems
    • Reza Olfati-Saber, J. Alex Fax, and Richard M. Murray
    • We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms (Abstract)
    • In networks of agents (or dynamic systems), “consensus” means to reach an agreement regarding a certain quantity of interest that depends on the state of all agents. A “consensus algorithm” (or protocol) is an interaction rule that specifies the information exchange between an agent and all of its (nearest) neighbors on the network (pp 215)
      • In my work, this is agreement on heading and velocity
    • Graph Laplacians are an important point of focus of this paper. It is worth mentioning that the second smallest eigenvalue of graph Laplacians called algebraic connectivity quantifies the speed of convergence of consensus algorithms. (pp 216)
    • More recently, there has been a tremendous surge of interest among researchers from various disciplines of engineering and science in problems related to multi-agent networked systems with close ties to consensus problems. This includes subjects such as consensus [26]–[32], collective behavior of flocks and swarms [19], [33]–[37], sensor fusion [38]–[40], random networks [41], [42], synchronization of coupled oscillators [42]–[46], algebraic connectivity of complex networks [47]–[49], asynchronous distributed algorithms [30], [50], formation control for multi-robot systems [51]–[59], optimization-based cooperative control [60]–[63], dynamic graphs [64]–[67], complexity of coordinated tasks [68]–[71], and consensus-based belief propagation in Bayesian networks [72], [73]. (pp 216)
      • That is a dense lit review. How did they order it thematically?
    • A byproduct of this framework is to demonstrate that seemingly different consensus algorithms in the literature [10], [12]–[15] are closely related. (pp 216)
    • To understand the role of cooperation in performing coordinated tasks, we need to distinguish between unconstrained and constrained consensus problems. An unconstrained consensus problem is simply the alignment problem in which it suffices that the state of all agents asymptotically be the same. In contrast, in distributed computation of a function f(z), the state of all agents has to asymptotically become equal to f(z), meaning that the consensus problem is constrained. We refer to this constrained consensus problem as the f-consensus problem. (pp 217)
      • Normal exploring/flocking/stampeding is unconstrained. Herding adds constraint, though it’s dynamic. The variables that have to be manipulated in the case of constraint to result in the same amount of consensus are probably what’s interesting here. Examples could be how ‘loud’ does the herder have to be? Also, how ‘primed’ does the population have to be to accept herding?
    • …cooperation can be informally interpreted as “giving consent to providing one’s state and following a common protocol that serves the group objective.” (pp 217)
    • Formal analysis of the behavior of systems that involve more than one type of agent is more complicated, particularly, in presence of adversarial agents in noncooperative games [79], [80]. (pp 217)
    • The reason matrix theory [81] is so widely used in analysis of consensus algorithms [10], [12], [13], [14], [15], [64] is primarily due to the structure of P in (4) and its connection to graphs. (pp 218)
    • The role of consensus algorithms in particle based flocking is for an agent to achieve velocity matching with respect to its neighbors. In [19], it is demonstrated that flocks are networks of dynamic systems with a dynamic topology. This topology is a proximity graph that depends on the state of all agents and is determined locally for each agent, i.e., the topology of flocks is a state dependent graph. The notion of state-dependent graphs was introduced by Mesbahi [64] in a context that is independent of flocking. (pp 218)
      • They leave out heading alignment here. Deliberate? Or is heading alignment just another variant on velocity?
    • Consider a network of decision-making agents with dynamics ẋi = ui interested in reaching a consensus via local communication with their neighbors on a graph G = (V, E). By reaching a consensus, we mean asymptotically converging to a one-dimensional agreement space characterized by the following equation: x1 = x2 = … = x (pp 219)
    • A dynamic graph G(t) = (V, E(t)) is a graph in which the set of edges E(t) and the adjacency matrix A(t) are time-varying. Clearly, the set of neighbors Ni(t) of every agent in a dynamic graph is a time-varying set as well. Dynamic graphs are useful for describing the network topology of mobile sensor networks and flocks [19]. (pp 219)
    • GraphLaplacianGradientDescent(pp 220)
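As a sanity check on the protocol above, the unconstrained consensus iteration is easy to simulate in discrete time with the Perron matrix P = I - eps*L (the paper requires 0 < eps < 1/Δ, the maximum degree). The ring topology and initial values here are made up for illustration:

```python
import numpy as np

def consensus_run(adj, x0, eps=0.1, steps=200):
    # Discrete-time consensus x(k+1) = P x(k), P = I - eps*L.
    # For a connected undirected graph and eps < 1/max_degree,
    # every state converges to the average of the initial states.
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    P = np.eye(len(A)) - eps * L        # Perron matrix
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = P @ x
    return x

ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]                   # 4-cycle, max degree 2
x = consensus_run(ring, [1.0, 2.0, 3.0, 6.0])   # all states -> 3.0
```

In the flocking reading, x would be each agent’s heading or velocity component rather than a scalar opinion.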
  • algebraic connectivity of a graph: The algebraic connectivity (also known as Fiedler value or Fiedler eigenvalue) of a graph G is the second-smallest eigenvalue of the Laplacian matrix of G.[1] This eigenvalue is greater than 0 if and only if G is a connected graph. This is a corollary to the fact that the number of times 0 appears as an eigenvalue in the Laplacian is the number of connected components in the graph. The magnitude of this value reflects how well connected the overall graph is. It has been used in analysing the robustness and synchronizability of networks. (wikipedia) (pp 220)
  • According to Gershgorin theorem [81], all eigenvalues of L in the complex plane are located in a closed disk centered at delta + 0j with a radius of delta, the maximum degree of a graph (pp 220)
    • This is another measure that I can do of the nomad/flock/stampede structures combined with DBSCAN. Each agent knows what agents it is connected with, and we know how many agents there are. Each agent row should just have the number of agents it is connected to.
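That measurement is a few lines of numpy, assuming the agents’ connections are already collapsed into a 0/1 adjacency matrix (the example graphs are made up):

```python
import numpy as np

def laplacian(adj):
    # L = D - A for an undirected graph given as a 0/1 adjacency matrix.
    A = np.asarray(adj, dtype=float)
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(adj):
    # Second-smallest Laplacian eigenvalue (the Fiedler value);
    # it is > 0 if and only if the graph is connected.
    eig = np.sort(np.linalg.eigvalsh(laplacian(adj)))
    return eig[1]

path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # connected path a-b-c
split = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]   # edge a-b plus an isolated c
```

Counting near-zero eigenvalues of the same Laplacian also gives the number of connected components, which should line up with what DBSCAN reports for the nomad/flock/stampede structures.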
  • In many scenarios, networked systems can possess a dynamic topology that is time-varying due to node and link failures/creations, packet-loss [40], [98], asynchronous consensus [41], state-dependence [64], formation reconfiguration [53], evolution [96], and flocking [19], [99]. Networked systems with a dynamic topology are commonly known as switching networks. (pp 226)
  • Conclusion: A theoretical framework was provided for analysis of consensus algorithms for networked multi-agent systems with fixed or dynamic topology and directed information flow. The connections between consensus problems and several applications were discussed that include synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. The role of “cooperation” in distributed coordination of networked autonomous systems was clarified and the effects of lack of cooperation were demonstrated by an example. It was demonstrated that notions such as graph Laplacians, nonnegative stochastic matrices, and algebraic connectivity of graphs and digraphs play an instrumental role in analysis of consensus algorithms. We proved that algorithms introduced by Jadbabaie et al. and Fax and Murray are identical for graphs with n self-loops and are both special cases of the consensus algorithm of Olfati-Saber and Murray. The notion of Perron matrices was introduced as the discrete-time counterpart of graph Laplacians in consensus protocols. A number of fundamental spectral properties of Perron matrices were proved. This led to a unified framework for expression and analysis of consensus algorithms in both continuous-time and discrete-time. Simulation results for reaching a consensus in small-worlds versus lattice-type nearest-neighbor graphs and cooperative control of multivehicle formations were presented. (pp 231)
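The Perron-matrix idea from that conclusion has a compact discrete-time sketch (again with an invented graph and values): P = I − εL is nonnegative and row-stochastic whenever ε < 1/Δ, so iterating x ← Px is a repeated local-averaging step that drives the agents to consensus.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle graph (invented example)
L = np.diag(A.sum(axis=1)) - A
Delta = A.sum(axis=1).max()              # max degree = 2
eps = 0.4                                # must satisfy eps < 1/Delta
P = np.eye(3) - eps * L                  # Perron matrix

assert (P >= 0).all()                    # nonnegative because eps < 1/Delta
assert np.allclose(P.sum(axis=1), 1.0)   # row-stochastic

x = np.array([0.0, 6.0, 3.0])
for _ in range(100):
    x = P @ x                            # discrete-time consensus step
print(x)  # all entries approach the average, 3.0
```

Since this P is symmetric (hence doubly stochastic), the discrete iteration preserves the state sum just as the continuous protocol does, and the agents converge to the initial average.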
  • Not sure about this one; it may just be another set of algorithms for flocking, with perhaps some network implications: Flocking for Multi-Agent Dynamic Systems: Algorithms and Theory. It is one of the papers that the Consensus and Cooperation paper above leans on heavily, though…
  • The Emergence of Consensus: A Primer
    • The origin of population-scale coordination has puzzled philosophers and scientists for centuries. Recently, game theory, evolutionary approaches and complex systems science have provided quantitative insights on the mechanisms of social consensus. However, the literature is vast and scattered widely across fields, making it hard for the single researcher to navigate it. This short review aims to provide a compact overview of the main dimensions over which the debate has unfolded and to discuss some representative examples. It focuses on those situations in which consensus emerges ‘spontaneously’ in absence of centralised institutions and covers topics that include the macroscopic consequences of the different microscopic rules of behavioural contagion, the role of social networks, and the mechanisms that prevent the formation of a consensus or alter it after it has emerged. Special attention is devoted to the recent wave of experiments on the emergence of consensus in social systems.
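One of the simplest microscopic rules of behavioural contagion covered by this literature is the voter model: at each step a random agent copies the opinion of another random agent, and on a complete graph the population almost surely reaches unanimity. A toy sketch (population size and seed are arbitrary choices of mine):

```python
import random

random.seed(2)
n = 20
opinions = [i % 2 for i in range(n)]     # start with a 50/50 split

steps = 0
while len(set(opinions)) > 1 and steps < 200_000:
    i, j = random.sample(range(n), 2)    # pick two distinct agents
    opinions[i] = opinions[j]            # agent i adopts agent j's opinion
    steps += 1

print(len(set(opinions)) == 1)           # True: consensus reached
```

Which opinion wins is random (proportional to its initial share), which is the “spontaneous, no central institution” character the review emphasizes.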
  • Critical dynamics in population vaccinating behavior
    • Complex adaptive systems exhibit characteristic dynamics near tipping points such as critical slowing down (declining resilience to perturbations). We studied Twitter and Google search data about measles from California and the United States before and after the 2014–2015 Disneyland, California measles outbreak. We find critical slowing down starting a few years before the outbreak. However, population response to the outbreak causes resilience to increase afterward. A mathematical model of measles transmission and population vaccine sentiment predicts the same patterns. Crucially, critical slowing down begins long before a system actually reaches a tipping point. Thus, it may be possible to develop analytical tools to detect populations at heightened risk of a future episode of widespread vaccine refusal.
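The critical-slowing-down signal is easy to demonstrate on a toy AR(1) process: as the autoregressive parameter a approaches the tipping point at a = 1, recovery from perturbations slows and the lag-1 autocorrelation rises toward 1. All parameter values here are illustrative, not from the paper.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

def ar1_series(a, n=20_000, seed=0):
    """Simulate x_{t+1} = a * x_t + Gaussian noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0, 1)
        out.append(x)
    return out

far = lag1_autocorr(ar1_series(a=0.2))    # far from the tipping point
near = lag1_autocorr(ar1_series(a=0.95))  # close to it: slow recovery
print(far < near)  # True: autocorrelation grows near the transition
```

Rising autocorrelation (and variance) in an observed series is exactly the kind of early-warning indicator the authors extract from the Twitter and search data.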
  • For Aaron’s Social Gradient Descent Agent research (lit review)
    • On distributed search in an uncertain environment (Something like Social Gradient Descent Agents)
      • The paper investigates the case where N agents solve a complex search problem by communicating to each other their relative successes in solving the task. The problem consists in identifying a set of unknown points distributed in an n–dimensional space. The interaction rule causes the agents to organize themselves so that, asymptotically, each agent converges to a different point. The emphasis of this paper is on analyzing the collective dynamics resulting from nonlinear interactions and, in particular, to prove convergence of the search process.
    • A New Clustering Algorithm Based Upon Flocking On Complex Network (Sizing and timing for flocking systems seems to be ok?)
      • We have proposed a model based upon flocking on a complex network, and then developed two clustering algorithms on the basis of it. In the algorithms, first a k-nearest-neighbor (knn) graph, as a weighted and directed graph, is produced among all data points in a dataset, each of which is regarded as an agent that can move in space; then a time-varying complex network is created by adding long-range links for each data point. Each data point is acted on not only by its k nearest neighbors but also by r long-range neighbors, through fields they establish together in space, so it takes a step along the direction of the vector sum of all fields. More importantly, these long-range links provide hidden information to each data point as it moves, and at the same time accelerate its convergence toward a center. As they move in space according to the proposed model, data points that belong to the same class gradually arrive at the same position, whereas those that belong to different classes move away from one another. The experimental results demonstrate that data points are clustered reasonably and efficiently, and that the clustering algorithms converge quickly. Comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.
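A toy version of the neighbor-driven movement (omitting the long-range links and field model, so this is only the basic mechanism, not the paper's algorithm) shows the idea: each point repeatedly steps toward the centroid of its k nearest neighbors until same-class points collapse together. Blob positions, k, and the step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs of 10 points each (invented data)
pts = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
                 rng.normal(5.0, 0.3, (10, 2))])

k = 5
for _ in range(50):
    # pairwise distances, then k nearest neighbors (column 0 is self)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]
    centroids = pts[nn].mean(axis=1)
    pts += 0.5 * (centroids - pts)       # step toward neighbor centroid

labels = (pts[:, 0] > 2.5).astype(int)   # trivially separable after collapse
print(labels)  # first 10 points in one cluster, last 10 in the other
```

Because all of a point's nearest neighbors lie in its own blob, each blob contracts onto its own centroid, which is the flocking-to-a-center behavior the abstract describes.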
  • Done with the first draft of the white paper! And added the RFP section to the LMN productization version
  • Amazon SageMaker: Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don’t have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own-algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to your specific workflows. Deploy a model into a secure and scalable environment by launching it with a single click from the Amazon SageMaker console. Training and hosting are billed by minutes of usage, with no minimum fees and no upfront commitments. (from the documentation)

4:00 – 5:00 Meeting with Aaron M. to discuss Academic RB wishlist.

Phil 12.4.17

7:00 – ASRC MKT

3:00 – Campus

  • Fika
  • Meeting w/Wayne
    • Up to date. He was a bit worried that I might be going off the rails with the Neural Coupling work, but relaxed when I showed how it was being used to buttress the flocking model. And I have access to an fMRI, it seems…
    • Information Ecologies – The common rhetoric about technology falls into two extreme categories: uncritical acceptance or blanket rejection. Claiming a middle ground, Bonnie Nardi and Vicki O’Day call for responsible, informed engagement with technology in local settings, which they call information ecologies. An information ecology is a system of people, practices, technologies, and values in a local environment. Nardi and O’Day encourage the reader to become more aware of the ways people and technology are interrelated. They draw on their empirical research in offices, libraries, schools, and hospitals to show how people can engage their own values and commitments while using technology.
  • Bonus meeting with Shimei. Rambled through the following topics
    • Reinforcement learning with flocks and gradient descent
    • Flocking, herding and social engineering
    • Suspicious OS
    • She has a tall son 🙂

Phil 11.26.17

User experience design for APIs

  • Writing code is rarely just a private affair between you and your computer. Code is not just meant for machines; it has human users. It is meant to be read by people, used by other developers, maintained and built upon. Developers produce better code, in greater quantity, when they are kept happy and productive, working with tools they love. Developers, unfortunately, are often let down by their tools, and left cursing at obscure error messages, wondering why that stupid library doesn’t do what they thought it would. Our tools have great potential to cause us pain, especially in a field as complex as software engineering.