Category Archives: Writing

Phil 10.22.18

7:00 – ASRC PhD

Phil 10.19.18

Phil 7:00 – 3:30 ASRC PhD

  • Sprint review
  • Reading Meltdown: Why Our Systems Fail and What We Can Do About It, and I found some really interesting work that relates to social conformity, flocking, stampeding, and nomadic behaviors (a toy sketch of the prediction-error idea appears after this list):
    • “We show that a deviation from the group opinion is regarded by the brain as a punishment,” said the study’s lead author, Vasily Klucharev. And the error message combined with a dampened reward signal produces a brain impulse indicating that we should adjust our opinion to match the consensus. Interestingly, this process occurs even if there is no reason for us to expect any punishment from the group. As Klucharev put it, “This is likely an automatic process in which people form their own opinion, hear the group view, and then quickly shift their opinion to make it more compliant with the group view.” (Page 154)
      • Reinforcement Learning Signal Predicts Social Conformity
        • Vasily Klucharev
        • We often change our decisions and judgments to conform with normative group behavior. However, the neural mechanisms of social conformity remain unclear. Here we show, using functional magnetic resonance imaging, that conformity is based on mechanisms that comply with principles of reinforcement learning. We found that individual judgments of facial attractiveness are adjusted in line with group opinion. Conflict with group opinion triggered a neuronal response in the rostral cingulate zone and the ventral striatum similar to the “prediction error” signal suggested by neuroscientific models of reinforcement learning. The amplitude of the conflict-related signal predicted subsequent conforming behavioral adjustments. Furthermore, the individual amplitude of the conflict-related signal in the ventral striatum correlated with differences in conforming behavior across subjects. These findings provide evidence that social group norms evoke conformity via learning mechanisms reflected in the activity of the rostral cingulate zone and ventral striatum.
    • When people agreed with their peers’ incorrect answers, there was little change in activity in the areas associated with conscious decision-making. Instead, the regions devoted to vision and spatial perception lit up. It’s not that people were consciously lying to fit in. It seems that the prevailing opinion actually changed their perceptions. If everyone else said the two objects were different, a participant might have started to notice differences even if the objects were identical. Our tendency for conformity can literally change what we see. (Page 155)
      • Gregory Berns
        • Dr. Berns specializes in the use of brain imaging technologies to understand human – and now, canine – motivation and decision-making.  He has received numerous grants from the National Institutes of Health, National Science Foundation, and the Department of Defense and has published over 70 peer-reviewed original research articles.
      • Neurobiological Correlates of Social Conformity and Independence During Mental Rotation
        • Background

          When individual judgment conflicts with a group, the individual will often conform his judgment to that of the group. Conformity might arise at an executive level of decision making, or it might arise because the social setting alters the individual’s perception of the world.

          Methods

          We used functional magnetic resonance imaging and a task of mental rotation in the context of peer pressure to investigate the neural basis of individualistic and conforming behavior in the face of wrong information.

          Results

          Conformity was associated with functional changes in an occipital-parietal network, especially when the wrong information originated from other people. Independence was associated with increased amygdala and caudate activity, findings consistent with the assumptions of social norm theory about the behavioral saliency of standing alone.

          Conclusions

          These findings provide the first biological evidence for the involvement of perceptual and emotional processes during social conformity.

        • The Pain of Independence: Compared to behavioral research of conformity, comparatively little is known about the mechanisms of non-conformity, or independence. In one psychological framework, the group provides a normative influence on the individual. Depending on the particular situation, the group’s influence may be purely informational – providing information to an individual who is unsure of what to do. More interesting is the case in which the individual has definite opinions of what to do but conforms due to a normative influence of the group due to social reasons. In this model, normative influences are presumed to act through the aversiveness of being in a minority position
      • A Neural Basis for Social Cooperation
        • Cooperation based on reciprocal altruism has evolved in only a small number of species, yet it constitutes the core behavioral principle of human social life. The iterated Prisoner’s Dilemma Game has been used to model this form of cooperation. We used fMRI to scan 36 women as they played an iterated Prisoner’s Dilemma Game with another woman to investigate the neurobiological basis of cooperative social behavior. Mutual cooperation was associated with consistent activation in brain areas that have been linked with reward processing: nucleus accumbens, the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex. We propose that activation of this neural network positively reinforces reciprocal altruism, thereby motivating subjects to resist the temptation to selfishly accept but not reciprocate favors.
  • Working on Antonio’s paper. I think I’ve found the two best papers to use for the market system. It turns out that the freight industry has been doing this for about 20 years, agent simulation and all.
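
The reinforcement-learning framing in the conformity papers above suggests a very simple computational reading: treat the gap between an individual's opinion and the group opinion as a prediction error and shift the opinion by some fraction of it. Here's a toy sketch in Python; the learning rate, the 1-10 attractiveness scale, and the group value are my own illustrative assumptions, not numbers from the studies.

```python
# Toy sketch: conformity as a prediction-error (delta-rule) update.
# alpha and the ratings are illustrative assumptions, not values from the papers.

def conform(own_rating: float, group_rating: float, alpha: float = 0.3) -> float:
    """Shift an individual rating toward the group rating by a fraction of the error."""
    prediction_error = group_rating - own_rating  # "conflict with group opinion"
    return own_rating + alpha * prediction_error

# Example: an individual rates a face 7 on a 1-10 scale; the group average is 4.
ratings = [7.0]
for _ in range(5):
    ratings.append(round(conform(ratings[-1], group_rating=4.0), 2))
print(ratings)  # the rating drifts from 7 toward the group view of 4
```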

Phil 10.18.18

7:00 – 9:00, 12:00 – ASRC PhD

  • Reading the New Yorker piece How Russia Helped Swing the Election for Trump, about Kathleen Hall Jamieson’s book Cyberwar: How Russian Hackers and Trolls Helped Elect a President—What We Don’t, Can’t, and Do Know. Some interesting points with respect to Adversarial Herding:
    • Jamieson’s Post article was grounded in years of scholarship on political persuasion. She noted that political messages are especially effective when they are sent by trusted sources, such as members of one’s own community. Russian operatives, it turned out, disguised themselves in precisely this way. As the Times first reported, on June 8, 2016, a Facebook user depicting himself as Melvin Redick, a genial family man from Harrisburg, Pennsylvania, posted a link to DCLeaks.com, and wrote that users should check out “the hidden truth about Hillary Clinton, George Soros and other leaders of the US.” The profile photograph of “Redick” showed him in a backward baseball cap, alongside his young daughter—but Pennsylvania records showed no evidence of Redick’s existence, and the photograph matched an image of an unsuspecting man in Brazil. U.S. intelligence experts later announced, “with high confidence,” that DCLeaks was the creation of the G.R.U., Russia’s military-intelligence agency.
    • Jamieson argues that the impact of the Russian cyberwar was likely enhanced by its consistency with messaging from Trump’s campaign, and by its strategic alignment with the campaign’s geographic and demographic objectives. Had the Kremlin tried to push voters in a new direction, its effort might have failed. But, Jamieson concluded, the Russian saboteurs nimbly amplified Trump’s divisive rhetoric on immigrants, minorities, and Muslims, among other signature topics, and targeted constituencies that he needed to reach. 
  • Twitter released its IRA dataset (announcement, archive), and Kate Starbird’s group has done some preliminary analysis
  • Need to do something about the NESTA Call for Ideas, which is due “11am on Friday 9th November”
  • Continuing with Market-Oriented Programming
    • Some thoughts on what the “cost” for a trip can reference
      • Passenger
        • Ticket price
          • provider: Current price, refundability, includes taxes
            • carbon
            • congestion
            • other?
          • consumer: Acceptable range
        • Travel time
        • Departure time
        • Arrival time (plus arrival time confidence)
        • comfort (legroom, AC)
        • Number of stops (related to convenience)
        • Number of passengers
        • Time to wait
        • Externalities like airport security, which adds roughly 2 hours to air travel
      • Cargo
        • Divisibility (ship as one or more items)
        • Physical state for shipping (packaged, indivisible solid, fluid, gas)
          • Waste to food grade to living (is there a difference between algae and cattle? Pets? Show horses?)
          • Refrigerated/heated
          • Danger
          • Stability/lifespan
          • weight
      • Aggregators provide simpler combinations of transportation options
    • Any exchange that supports this format should be able to participate. Additionally, each exchange should contain a list of other exchanges that a consumer can request, so we don’t need another level of hierarchy. Exchanges could rate other exchanges as a quality measure (a toy sketch of such a listing appears after these notes).
      • It also occurs to me that there could be some kind of peer-to-peer or mesh network for degraded modes. A degraded mode implies a certain level of emergency, which would affect the (now small-scale) allocation of resources.
    • Some stuff about Mobility as a Service. Slide deck (from Canada Intelligent Transportation Service), and an app (Whim)
  • PSC AI/ML working group 9:00 – 12:00, plus writeup
    • PSC will convene a working group meeting on Thursday, Oct. 18 from 9am – 10am to identify actions and policy considerations related to advancing the use of AI solutions in government. Come prepared to share your ideas and experience. We would welcome your specific feedback on these questions:
      • How can PSC help make the government a “smarter buyer” when it comes to AI/ML?
      • How are agencies effectively using AI/ML today?
      • In what other areas could these technologies be deployed in government today?
        • Looking for bad sensors on NOAA satellites
      • What is the current federal market and potential future market for AI/ML?
      • Notes:
        • How to help our members – federal contracts. Help make the federal market frictionless
        • Kevin – SmartForm? What are the main government concerns? Is it worry about false positives?
          • Competitiveness – no national strategy
          • Appropriate use, particularly law enforcement
          • Robotic Process Automation (RPA): security, compliance, and adoption. Compliance testing.
          • Data trust. Humans make errors. When ML makes the same errors, it’s worse.
        • A system that takes time to become accurate by watching people perform is not the kind of system that the government can buy.
          • This implies that there has to be immediate benefit, with the possibility of downstream benefit.
        • Dell would love to participate (in what?) Something about cloud
        • Replacing legacy processes with better approaches
        • FedRAMP-like compliance mechanism for AI. It is a requirement if it is a cloud service.
        • Perceived, implicit bias is the dominant narrative on the government side. Specific applications like facial recognition
        • Take a look at all the laws that might affect AI, to see how the constraints are affecting adoption/use with an eye towards removing barriers
        • Chris ?? There isn’t a very good understanding or clear linkage between the promise and the current problems, such as staffing, tagged data, etc.
        • What does it mean to be reskilled and retrained in an AI context?
        • President’s Management Agenda
        • The killer app is cost savings, particularly when one part of government is getting a better price than another part.
        • Federal Data Strategy
        • Send a note to Kevin about data availability. The difference between NOAA sensor data (clean and abundant) and financial data (constantly changing spreadsheets that are not standardized). Maybe the creation of tools that make it easier to standardize data than to use artisanal (usually Excel-based) solutions. Wrote it up for Aaron to review. It turned out to be a page.
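
To make the “cost of a trip” attributes above concrete, here is a hypothetical sketch of what a provider listing and a consumer-side cost function might look like on such an exchange. The field names, weights, and sample offers are assumptions for discussion, not a proposed standard.

```python
# Hypothetical trip listing for a transportation exchange.
# Field names, weights, and sample values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class TripOffer:
    provider: str
    price: float               # current ticket price, taxes included
    refundable: bool
    carbon_kg: float           # carbon externality estimate
    depart_hour: float         # local time, 0-24
    travel_hours: float        # includes externalities like airport security
    arrival_confidence: float  # 0-1 confidence in the quoted arrival time
    stops: int
    legroom_cm: float          # comfort proxy

def passenger_cost(offer: TripOffer, max_price: float) -> float:
    """Toy scalar 'cost' a consumer agent could use to rank offers."""
    if offer.price > max_price:
        return float("inf")                     # outside the acceptable price range
    return (offer.price
            + 20.0 * offer.travel_hours         # assumed value of time, $/hour
            + 10.0 * offer.stops                # inconvenience per stop
            + 0.05 * offer.carbon_kg            # externality weight
            - 50.0 * offer.arrival_confidence)  # reward reliable arrival estimates

offers = [
    TripOffer("air", 220.0, False, 180.0, 9.0, 4.5, 0.80, 1, 74.0),
    TripOffer("rail", 150.0, True, 40.0, 8.0, 7.0, 0.95, 0, 90.0),
]
best = min(offers, key=lambda o: passenger_cost(o, max_price=300.0))
print(best.provider)  # "rail" wins under these (assumed) weights
```

Any exchange publishing listings in a shared format like this could participate; an aggregator would just collect TripOffer-style records from multiple exchanges and rank them with the consumer's own cost function.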

Phil 10.17.18

7:00 – 4:00 Antonio Workshop

Phil 10.16.18

7:00 – 4:00 ASRC DARPA

  • Steve had some good questions about quantitative measures:
    • I think there are some good answers that we can provide here on determining the quality of maps. The number of users is an educated guess though. In my simulations, I can generate enough information to create maps using about 100 samples per agent. I’m working on a set of experiments that will produce “noisier” data that will provide a better estimate, but that won’t be ready until December. So we can say that “simulations indicate that approximately 100 users will have to interact through a total of 100 threaded posts to produce meaningful maps”
    • With respect to the maps themselves, we can determine quality in four ways. The mechanism for making this comparison will be bootstrap sampling (https://en.wikipedia.org/wiki/Bootstrapping_(statistics)), which is an extremely effective way of comparing two unknown distributions. In our case, the distribution will be the coordinate of each topic in the embedding space.
      1. Repeatability: Can multiple maps generated on the same data set be made to align? Embedding algorithms often start with random values. As such, embeddings that are similar may appear different because they have different orientations. To determine similarity we would apply a least-squares transformation of one map with respect to the other (a minimal alignment sketch appears after this list). Once complete, we would expect a greater than 90% match between the two maps for the alignment to count as a success.
      2. Resolution: What is the smallest level of detail that can be rendered accurately? We will be converting words into topics and then placing the topics in an embedding space. As described in the document, we expect to do this with Non-Negative Matrix Factorization (NMF). If we factor all the discussions down to a single topic (i.e. “words”), then we will have a single point map that can always be rendered with 100% repeatability, but it has 0% precision. If, on the other hand, we can place every word in every discussion on the map, but the relationships are different every time, then we can have 100% precision, but 0% repeatability. As we cluster terms together, we need to compare repeated runs to see that we get similar clusters each time. We need to find the level of abstraction that will give us a high level of repeatability. A 90% match is our expectation.
      3. Responsiveness: Maps change over time. A common example is a weather map, though political maps shift borders and physical maps reflect geographic activity like shoreline erosion. The rate of change may reflect the accuracy of the map, with slow change happening across large scales while rapid changes are visible at higher resolutions. A change at the limit of resolution should ideally be reflected immediately in the map without adjusting the surrounding areas.
  • More frantic flailing to meet the deadline. DONE!!!
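
Below is a minimal sketch of the repeatability check described in item 1: align two independently generated 2D topic maps with a least-squares (Procrustes) transformation and report the fraction of topics that land within a tolerance of each other. The synthetic maps, the rotation, and the tolerance are assumptions for illustration; on real maps the same comparison would be wrapped in the bootstrap sampling mentioned above.

```python
# Minimal sketch of the repeatability check: least-squares alignment of two topic maps.
# The synthetic maps, rotation, noise level, and 0.1 tolerance are illustrative assumptions.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
map_a = rng.normal(size=(50, 2))                      # topic coordinates from run A

theta = np.pi / 3                                     # run B: same structure, rotated plus noise
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
map_b = map_a @ rotation + rng.normal(scale=0.02, size=map_a.shape)

aligned_a, aligned_b, disparity = procrustes(map_a, map_b)

tolerance = 0.1                                       # agreement threshold in the standardized space
distances = np.linalg.norm(aligned_a - aligned_b, axis=1)
match = np.mean(distances < tolerance)
print(f"disparity={disparity:.4f}, match={match:.0%}")  # expect well above the 90% target
```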

4:00 – 5:30 Antonio Workshop

Phil 10.15.18

7:00 – ASRC BD

  • Heard about some interesting things this morning on BBC Business Daily – Is the Internet Fit for Purpose?:
    • Future in Review Conference: The leading global conference on the intersection of technology and the economy. New partnerships, projects, and plans you can’t afford to miss. If your success depends on having an accurate view of the future, or you’d like to meet others who are able and motivated to forge action-based alliances, this is the most important conference you will attend. Be one of the thought leaders in the FiRe conversation, analyzing and creating the future of technology, economics, pure science, the environment, genomics, education, and more.
    • Berit Anderson. Created the science fact/fiction magazine Scout, which, interestingly enough, has a discussion space for JuryRoom-style questions
  • More DARPA proposal

Phil 10.11.18

7:00 – ASRC BD

  • Finishing up notes on the Evolution of Cooperation
  • More proposal writing. Come at it from the creation of belief space maps, the benefits they provide in uncertain information (prediction, known topography, etc), what it takes to create them, and how they integrate with GIS
  • 10:00 – 12:00 Will’s proposal
  • haircut!
  • 4:00 flu shot
  • In this codebook we will investigate the macro-structure of philosophical literature. As a base for our investigation I have collected about fifty-thousand records from the Web of Science collection, spanning from the late forties to this very day.

Phil 10.10.18

7:00 – 4:30 ASRC BD

  • Starting to add content to the proposal. Going to put together a section on game theory that ties together Beyond Individual Choice, The Evolution of Cooperation, and Consensus and Cooperation in Networked Multi-Agent Systems
    • Got some good writing done, but didn’t upload!
  • And also, voter influencing from this post
  • And I just saw this! Structure of Decision: The Cognitive Maps of Political Elites. It’s another book by Robert Axelrod. Ordered.
  • Putting together my notes on the Evolution of Cooperation. Can’t believe I haven’t done that yet
  • Got a good response from Antonio. Need to respond
  • Found some good stuff on market-oriented programming for Antonio’s workshop. The person who seems to really own this space is Michael Wellman (Scholar). Downloaded several of his papers.
  • From Benjamin Schmidt, via Twitter:
    • I have a new article in CA on dimensionality reduction for digital libraries. Let me walk through one figure, Eames power-of-10 style, that shows a machine clustering of all 14 million books in the collection–including most of the books you’ve read.
    • This is very close to mapping as I understand it. There is the ability to zoom in and out at different levels of structure. His repo is here, but it’s for [R]
    • Random Projection in Scikit-learn (a minimal usage sketch appears after this list)
    • Here’s the paper it’s based on: Visualizing Large-scale and High-dimensional Data
      • We study the problem of visualizing large-scale and high-dimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. These two steps suffer from considerable computational costs, preventing the state-of-the-art methods such as the t-SNE from scaling to large-scale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to t-SNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of high-dimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. The hyper-parameters of LargeVis are also much more stable over different data sets.
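
Since random projection came up as a scalable first step, here is a minimal scikit-learn sketch: use a cheap, approximately distance-preserving random projection to shrink high-dimensional document vectors before handing them to a slower 2D embedding. The synthetic data and the 100 → 20 → 2 pipeline are assumptions for illustration.

```python
# Minimal sketch: random projection as a cheap reduction step before a 2D embedding.
# The synthetic document vectors and the 100 -> 20 -> 2 pipeline are illustrative assumptions.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
docs = rng.random((1000, 100))         # stand-in for 1,000 document feature vectors

rp = GaussianRandomProjection(n_components=20, random_state=0)
docs_reduced = rp.fit_transform(docs)  # fast, data-independent projection

coords = TSNE(n_components=2, random_state=0).fit_transform(docs_reduced)
print(coords.shape)                    # (1000, 2) -- one map coordinate per document
```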

Phil 10.9.18

7:00 – 4:00 ASRC BD

  • Drive to work in Tesla. Ride to pick up Porsche lunch-ish. Drive home with bike. Ride to work. Drive home with bike. Who knew that the Towers of Hanoi would be such practical training?
  • Finish Antonio response and send it off. I think it needs a discussion of the structure of the paper and who is responsible for which section to be complete.
  • Artificial Intelligence and Social Simulation: Studying Group Dynamics on a Massive Scale
    • Recent advances in artificial intelligence and computer science can be used by social scientists in their study of groups and teams. Here, we explain how developments in machine learning and simulations with artificially intelligent agents can help group and team scholars to overcome two major problems they face when studying group dynamics. First, because empirical research on groups relies on manual coding, it is hard to study groups in large numbers (the scaling problem). Second, conventional statistical methods in behavioral science often fail to capture the nonlinear interaction dynamics occurring in small groups (the dynamics problem). Machine learning helps to address the scaling problem, as massive computing power can be harnessed to multiply manual codings of group interactions. Computer simulations with artificially intelligent agents help to address the dynamics problem by implementing social psychological theory in data-generating algorithms that allow for sophisticated statements and tests of theory. We describe an ongoing research project aimed at computational analysis of virtual software development teams.
    • This appears to be a simulation/real world project that models GitHub groups
  • Continue BAA work? I need to know what Matt’s found out about the topic.
    • Some good discussion. Got his email of notes from his meeting with Steve
    • Created a “Disruptioneering technical” template
    • Copied the template and started filling in the technical sections
  • DARPA announced its new initiative, AI Next, which will invest $2 billion in AI R&D to explore how “machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.” Since fiscal 2017, DARPA has stepped up its investment in artificial intelligence by almost 50 percent, from $307 million to $448 million.
  • DARPA’s move follows the Pentagon’s June decision to launch a $1.7 billion Joint Artificial Intelligence Center, or JAIC (pronounced “Jake”), to promote collaboration on AI-related R&D among military service branches, the private sector, and academia. The challenge is to transform relatively smaller contracts and some prototype systems development into large scale field deployment.

Phil 10.8.18

7:00 – 12:00, 2:00 – 5:00 ASRC Research

  • Finish up At Home in the Universe notes – done!
  • Get started on framing out Antonio’s paper – good progress!
    • Basically, Aaron and I think there is a spectrum of interaction that can occur in these systems. At one end is some kind of market, where communication is mediated through price, time, and convenience to the transportation user. At the other is a more top down, control system way of dealing with this. NIST RCS would be an example of this. In between these two extremes are control hierarchies that in turn interact through markets
  • Wrote up some early thoughts on how simulation and machine learning can be a thinking fast and slow solution to understandable AI

Phil 10.5.18

7:00 – 5:00 ASRC MKT

  • Seasucker.com for roof racks?
  • Continuing to write up notes from At Home in the Universe.
  • Discussion with Matt about the deliverables for the Serial Interactions in Imperfect Information Games Applied to Complex Military Decision-Making (SI3-CMD) request
  • DARPA proposal template
  • Resized images for Aaron
  • Biodiversity and the Balance of Nature
    • As we destroy biological diversity, what else are we doing to the environment, what is being changed, and how will those changes affect us? One part of the answers to these questions is provided by Lawton and Brown’s consideration of redundancy (Chap. 12). Part of what the environment does for us involves “ecosystem services” — the movement of energy and nutrients through the air, water, and land, and through the food chains (Ehrlich, Foreword). Just how much biological diversity we need to keep the movement at approximately natural levels is a question of critical importance. Nonetheless, it is not a question that is commonly asked. A major synthesis of theories on the dynamics of nutrient cycling (DeAngelis 1991) devotes little space to the consequences of changes in the numbers of species per trophic level: It is the number of trophic levels that receives the attention. One might well conclude that, over broad limits, ecosystem services will continue to be provided, so long as there are some plants, some animals, some decomposers, and so on. Lawton and Brown conclude that numerous species are redundant.

Phil 10.4.18

7:00 – 5:30 ASRC MKT

  • Join PCA! Write classified! Done
  • There are 56 work days until Jan 1. My 400 hours is 50 days. So I go full time on research around the 22nd.
  • Got a note from Wayne saying that there were 25 blue sky papers and 3 slots. That might be expanded to 6 slots
  • Write up notes on “At Home in the Universe” – started
  • Finish speaking notes for BAA – Done
  • Matt found a couple of things that might be good. One is due on October 16th, which is waaaaaaaaaaaaayyyyyyyy too tight.
  • Looked at the Health.mil Connected Health clearinghouse effort and website. It sounds a lot like a military version of PubMed, with the ability to request reports on demand, plus some standardized reports as well. These reports seem to source back to other agencies like the CDC, with external SMEs.

Phil 10.2.18

7:00 – 5:00 ASRC Research

  • Graph laplacian dissertation
    • The spectrum of the normalized graph Laplacian can reveal structural properties of a network and can be an important tool to help solve the structural identification problem. From the spectrum, we attempt to develop a tool that helps us to understand the network structure on a deep level and to identify the source of the network to a greater extent. The information about different topological properties of a graph carried by the complete spectrum of the normalized graph Laplacian is explored. We investigate how and why structural properties are reflected by the spectrum and how the spectrum changes when comparing different networks from different sources.
  • Universality classes in nonequilibrium lattice systems
    • This article reviews our present knowledge of universality classes in nonequilibrium systems defined on regular lattices. The first section presents the most important critical exponents and relations, as well as the field-theoretical formalism used in the text. The second section briefly addresses the question of scaling behavior at first-order phase transitions. In Sec. III the author looks at dynamical extensions of basic static classes, showing the effects of mixing dynamics and of percolation. The main body of the review begins in Sec. IV, where genuine, dynamical universality classes specific to nonequilibrium systems are introduced. Section V considers such nonequilibrium classes in coupled, multicomponent systems. Most of the known nonequilibrium transition classes are explored in low dimensions between active and absorbing states of reaction-diffusion-type systems. However, by mapping they can be related to the universal behavior of interface growth models, which are treated in Sec. VI. The review ends with a summary of the classes of absorbing-state and mean-field systems and discusses some possible directions for future research.
  • “The Government Spies Using Our Webcams:” The Language of Conspiracy Theories in Online Discussions
    • Conspiracy theories are omnipresent in online discussions—whether to explain a late-breaking event that still lacks official report or to give voice to political dissent. Conspiracy theories evolve, multiply, and interconnect, further complicating efforts to limit their propagation. It is therefore crucial to develop scalable methods to examine the nature of conspiratorial discussions in online communities. What do users talk about when they discuss conspiracy theories online? What are the recurring elements in their discussions? What do these elements tell us about the way users think? This work answers these questions by analyzing over ten years of discussions in r/conspiracy—an online community on Reddit dedicated to conspiratorial discussions. We focus on the key elements of a conspiracy theory: the conspiratorial agents, the actions they perform, and their targets. By computationally detecting agent–action–target triplets in conspiratorial statements, and grouping them into semantically coherent clusters, we develop a notion of narrative-motif to detect recurring patterns of triplets. For example, a narrative-motif such as “governmental agency–controls–communications” appears in diverse conspiratorial statements alleging that governmental agencies control information to nefarious ends. Thus, narrative-motifs expose commonalities between multiple conspiracy theories even when they refer to different events or circumstances. In the process, these representations help us understand how users talk about conspiracy theories and offer us a means to interpret what they talk about. Our approach enables a population-scale study of conspiracy theories in alternative news and social media with implications for understanding their adoption and combating their spread
  • Need to upload to ArXiv (try multiple tex files) – done!
  • If I’m charging my 400 hours today, then start putting together text prediction. I’d like to try the Google prediction series to see what happens. Otherwise, there are two things I’d like to try with LSTMs, since they can take sequences of 2D coordinates as inputs (a minimal sketch appears after this list):
    • Use a 2D embedding space
    • Use NLP to get a parts-of-speech (PoS) analysis of the text so that there can be a (PoS, Word) coordinate.
    • Evaluate the 2 approaches on their ability to converge?
  • Coordinating with Antonio about workshops. It’s the 2019 version of this: International Workshop on Massively Multi-Agent Systems (MMAS2018) in conjunction with IJCAI/ECAI/AAMAS/ICML 2018
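
Here is a minimal Keras sketch of the second LSTM idea above: feed the network sequences of 2D coordinates (positions in a 2D embedding space, or a (PoS, word) pair mapped to two numbers) and predict the next coordinate. The layer sizes, sequence length, and synthetic training data are assumptions for illustration, not a tuned model.

```python
# Minimal sketch: an LSTM over sequences of 2D coordinates, predicting the next coordinate.
# Layer sizes, sequence length, and the random stand-in data are illustrative assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

seq_len, dims = 10, 2
rng = np.random.default_rng(0)
X = rng.normal(size=(500, seq_len, dims)).astype("float32")  # 500 coordinate sequences
y = rng.normal(size=(500, dims)).astype("float32")           # next coordinate for each sequence

model = Sequential([
    LSTM(64, input_shape=(seq_len, dims)),
    Dense(dims),                    # predicted next (x, y) coordinate
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

next_coord = model.predict(X[:1], verbose=0)
print(next_coord.shape)             # (1, 2)
```

Whether the 2D-embedding coordinates or the (PoS, word) coordinates converge faster would be the comparison to run.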

Phil 10.1.18

7:00 – 8:30 ASRC MKT?

  • Last Friday, Aaron was told by division leadership (Mike M) that R&D is being terminated as of Jan 1st and to get on billable projects. This is going against our impression of how things were going, so it’s unclear what will actually happen. So I’m not looking for a job just yet… Personally, I blame putting a deposit down on a Tesla Model 3.
  • This looks interesting:
    • Launched in October 2015 by founding editor Robert Kadar with support from Joe Brewer, David Sloan Wilson, The Evolution Institute, and Steve Roth — who now serves as publisher — Evonomics has emerged as a powerful voice for the sea change that is sweeping through economics.
  • Working my way through At Home in the Universe
    • Fontana Lab
      • Molecular biology offers breathtaking views of the parts and processes that undergird life and its evolution. It is vexing, then, that we seem unable to analytically grasp the principles that would make the nature of cellular phenotypes more intelligible and their control more deliberate. One can always blame insufficient knowledge, but we also entertain the idea that physics and chemistry need formal and conceptual enrichment from computer science to become an appropriate foundation for systems biology. This view arises from the belief that computation is a natural phenomenon, like gravity or boiling water. We need adequate formalisms and models to reason about computation in the wild. This view guides many of our lab’s interests, which span the development and application of rule-based formalisms for modeling complex systems of molecular interaction, causality in concurrent systems, the interplay between network growth and network dynamics, phenotypic plasticity and evolvability, learning, and aging. Our approach is computational and theoretical. In the past we also conducted experimental work using C. elegans as a model system. Outside collaborations are essential to our group. The size of our team can fluctuate considerably, as we chase grants in pursuit of our passions, not opportunistically. Read more about our research.
  • Due date for the iConference Paper. Submitted last night just to be safe, but I expect to tweak today.
    • incorporating Wayne’s changes
    • Final push with Wayne on campus
    • Done! Submitted
    • Need to upload to ArXiv (try multiple tex files)
  • From The Atlantic – stampede end condition:
    • It is impossible at this moment to envisage the Republican Party coming back. Like a brontosaurus with some brain-eating disorder it might lumber forward in the direction dictated by its past, favoring deregulation of businesses here and standing up to a rising China there, but there will be no higher mental functioning at work. And so it will plod into a future in which it is detested in a general way by women, African Americans, recent immigrants, and the educated young as well as progressives pure and simple. It might stumble into a political tar pit and cease to exist or it might survive as a curious, decaying relic of more savage times and more primitive instincts, lashing out and crushing things but incapable of much else.