
Phil 10.18.18

7:00 – 9:00, 12:00 – ASRC PhD

  • Reading the New Yorker piece How Russia Helped Swing the Election for Trump, about Kathleen Hall Jamieson’s book Cyberwar: How Russian Hackers and Trolls Helped Elect a President—What We Don’t, Can’t, and Do Know. Some interesting points with respect to Adversarial Herding:
    • Jamieson’s Post article was grounded in years of scholarship on political persuasion. She noted that political messages are especially effective when they are sent by trusted sources, such as members of one’s own community. Russian operatives, it turned out, disguised themselves in precisely this way. As the Times first reported, on June 8, 2016, a Facebook user depicting himself as Melvin Redick, a genial family man from Harrisburg, Pennsylvania, posted a link to DCLeaks.com, and wrote that users should check out “the hidden truth about Hillary Clinton, George Soros and other leaders of the US.” The profile photograph of “Redick” showed him in a backward baseball cap, alongside his young daughter—but Pennsylvania records showed no evidence of Redick’s existence, and the photograph matched an image of an unsuspecting man in Brazil. U.S. intelligence experts later announced, “with high confidence,” that DCLeaks was the creation of the G.R.U., Russia’s military-intelligence agency.
    • Jamieson argues that the impact of the Russian cyberwar was likely enhanced by its consistency with messaging from Trump’s campaign, and by its strategic alignment with the campaign’s geographic and demographic objectives. Had the Kremlin tried to push voters in a new direction, its effort might have failed. But, Jamieson concluded, the Russian saboteurs nimbly amplified Trump’s divisive rhetoric on immigrants, minorities, and Muslims, among other signature topics, and targeted constituencies that he needed to reach. 
  • Twitter released IRA dataset (announcement, archive), and Kate Starbird’s group has done some preliminary analysis
  • Need to do something about the NESTA Call for Ideas, which is due “11am on Friday 9th November.”
  • Continuing with Market-Oriented Programming
    • Some thoughts on what the “cost” for a trip can reference (a data-structure sketch follows this list)
      • Passenger
        • Ticket price
          • provider: Current price, refundability, includes taxes
            • carbon
            • congestion
            • other?
          • consumer: Acceptable range
        • Travel time
        • Departure time
        • Arrival time (plus arrival time confidence)
        • Comfort (legroom, AC)
        • Number of stops (related to convenience)
        • Number of passengers
        • Time to wait
        • Externalities like airport security, which can add roughly 2 hours to air travel
      • Cargo
        • Divisibility (ship as one or more items)
        • Physical state for shipping (packaged, indivisible solid, fluid, gas)
          • Waste to food grade to living (is there a difference between algae and cattle? Pets? Show horses?)
          • Refrigerated/heated
          • Danger
          • Stability/lifespan
          • Weight
      • Aggregators provide simpler combinations of transportation options
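      • A minimal sketch of how the passenger-side dimensions above might be encoded as a request record. All field names, types, and defaults are illustrative assumptions, not a proposed standard:

```python
# Hypothetical passenger-side trip-cost record, following the
# dimensions listed above; nothing here is a proposed standard.
from dataclasses import dataclass, field

@dataclass
class TicketPrice:
    current_price: float             # provider: current price, includes taxes
    refundable: bool = False
    carbon_tax: float = 0.0
    congestion_tax: float = 0.0

@dataclass
class PassengerTripCost:
    price: TicketPrice
    acceptable_price_range: tuple    # consumer: (min, max) acceptable price
    travel_time_hrs: float
    departure_time: str
    arrival_time: str
    arrival_confidence: float        # 0..1 confidence in the arrival time
    comfort: dict = field(default_factory=dict)  # e.g. {"legroom_cm": 79, "ac": True}
    num_stops: int = 0
    num_passengers: int = 1
    wait_time_hrs: float = 0.0
    externality_hrs: float = 0.0     # e.g. airport security, roughly 2 hours
```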
    • Any exchange that supports this format should be able to participate. Additionally, each exchange should contain a list of other exchanges that a consumer can request, so we don’t need another level of hierarchy. Exchanges could rate other exchanges as a quality measure (a minimal sketch follows below)
      • It also occurs to me that there could be some kind of peer-to-peer or mesh network for degraded modes. A degraded mode implies a certain level of emergency, which would affect the (now small-scale) allocation of resources.
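      • A minimal sketch of the federation idea, assuming each exchange simply publishes a rated list of peers. All names and the rating scheme are illustrative assumptions:

```python
# Hypothetical peer-listing exchange: consumers can ask any exchange
# for other exchanges it knows about, so no extra hierarchy is needed.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Exchange:
    name: str
    peer_ratings: Dict[str, float] = field(default_factory=dict)

    def rate_peer(self, peer: str, rating: float) -> None:
        # Exchanges rate other exchanges as a quality measure
        self.peer_ratings[peer] = rating

    def peers(self) -> List[str]:
        # Best-rated peers first; a consumer can request this list
        return sorted(self.peer_ratings, key=self.peer_ratings.get, reverse=True)

hub = Exchange("regional-transport")
hub.rate_peer("air-freight", 0.7)
hub.rate_peer("commuter-rail", 0.9)
print(hub.peers())  # ['commuter-rail', 'air-freight']
```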
    • Some stuff about Mobility as a Service. Slide deck (from Canada Intelligent Transportation Service), and an app (Whim)
  • PSC AI/ML working group 9:00 – 12:00, plus writeup
    • PSC will convene a working group meeting on Thursday, Oct. 18 from 9am – 10am to identify actions and policy considerations related to advancing the use of AI solutions in government. Come prepared to share your ideas and experience. We would welcome your specific feedback on these questions:
      • How can PSC help make the government a “smarter buyer” when it comes to AI/ML?
      • How are agencies effectively using AI/ML today?
      • In what other areas could these technologies be deployed in government today?
        • Looking for bad sensors on NOAA satellites
      • What is the current federal market and potential future market for AI/ML?
      • Notes:
        • How to help our members – federal contracts. Help make the federal market frictionless
        • Kevin – SmartForm? What are the main government concerns? Is it worry about false positives?
          • Competitiveness – no national strategy
          • Appropriate use, particularly law enforcement
          • Robotic Process Automation (RPA): security, compliance, and adoption. Compliance testing.
          • Data trust. Humans make errors. When ML makes the same errors, it’s worse.
        • A system that needs time watching people perform before it becomes accurate is not the kind of system that the government can buy.
          • This implies that there has to be an immediate benefit, with the possibility of downstream benefit.
        • Dell would love to participate (in what?) Something about cloud
        • Replacing legacy processes with better approaches
        • FedRAMP-like compliance mechanism for AI. FedRAMP is a requirement if a product is a cloud service.
        • Perceived, implicit bias is the dominant narrative on the government side, particularly for specific applications like facial recognition
        • Take a look at all the laws that might affect AI, to see how the constraints are affecting adoption/use with an eye towards removing barriers
        • Chris ?? There isn’t a very good understanding or clear linkage between the promise and the current problems, such as staffing, tagged data, etc.
        • What does it mean to be reskilled and retrained in an AI context?
        • President’s Management Agenda
        • The killer app is cost savings, particularly when one part of government is getting a better price than another part.
        • Federal Data Strategy
        • Send a note to Kevin about data availability. The difference between NOAA sensor data (clean and abundant) and financial data, constantly changing spreadsheets that are not standardized. Maybe the creation of tools that make it easier to standardize data than to use artisanal (usually Excel-based) solutions. Wrote it up for Aaron to review. It turned out to be a page.

Phil 10.17.18

7:00 – 4:00 Antonio Workshop

Phil 10.16.18

7:00 – 4:00 ASRC DARPA

  • Steve had some good questions about quantitative measures:
    • I think there are some good answers that we can provide here on determining the quality of maps. The number of users is an educated guess though. In my simulations, I can generate enough information to create maps using about 100 samples per agent. I’m working on a set of experiments that will produce “noisier” data that will provide a better estimate, but that won’t be ready until December. So we can say that “simulations indicate that approximately 100 users will have to interact through a total of 100 threaded posts to produce meaningful maps”
    • With respect to the maps themselves, we can determine quality in four ways. The mechanism for making this comparison will be bootstrap sampling (https://en.wikipedia.org/wiki/Bootstrapping_(statistics)), which is an extremely effective way of comparing two unknown distributions. In our case, the distribution will be the coordinate of each topic in the embedding space.
      1. Repeatability: Can multiple maps generated on the same data set be made to align? Embedding algorithms often start with random values. As such, embeddings that are similar may appear different because they have different orientations. To determine similarity we would apply a least-squares transformation of one map with respect to the other. Once complete, we would expect a greater than 90% match between the two maps in successive runs (a minimal alignment sketch follows this list).
      2. Resolution: What is the smallest level of detail that can be rendered accurately? We will be converting words into topics and then placing the topics in an embedding space. As described in the document, we expect to do this with Non-Negative Matrix Factorization (NMF). If we factor all the discussions down to a single topic (i.e. “words”), then we will have a single-point map that can always be rendered with 100% repeatability, but it has 0% precision. If, on the other hand, we can place every word in every discussion on the map, but the relationships are different every time, then we have 100% precision, but 0% repeatability. As we cluster terms together, we need to compare repeated runs to see that we get similar clusters each time. We need to find the level of abstraction that will give us a high level of repeatability. A 90% match is our expectation.
      3. Responsiveness: Maps change over time. A common example is a weather map, though political maps shift borders and physical maps reflect geographic activity like shoreline erosion. The rate of change may reflect the accuracy of the map, with slow change happening across large scales while rapid changes are visible at higher resolutions. A change at the limit of resolution should ideally be reflected immediately in the map and not adjust the surrounding areas.
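      • A minimal sketch of the repeatability check in item 1, assuming two maps arrive as (n_topics x 2) coordinate arrays with topics in the same order. The tolerance, sizes, and names are illustrative assumptions:

```python
# Align one embedding to another with a least-squares (orthogonal
# Procrustes) rotation, then report the fraction of topics that land
# within a tolerance of their counterparts.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def map_match_fraction(map_a, map_b, tol=0.1):
    a = map_a - map_a.mean(axis=0)        # center both maps so that only
    b = map_b - map_b.mean(axis=0)        # a rotation remains
    rot, _ = orthogonal_procrustes(b, a)  # best-fit rotation of b onto a
    dists = np.linalg.norm(a - b @ rot, axis=1)
    return np.mean(dists < tol)           # fraction of matching topics

# Two "runs" of the same embedding, differing by rotation plus noise
rng = np.random.default_rng(0)
run1 = rng.normal(size=(50, 2))
theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
run2 = run1 @ rotation + rng.normal(scale=0.01, size=(50, 2))
print(f"match: {map_match_fraction(run1, run2):.0%}")  # expect > 90%
```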
  • More frantic flailing to meet the deadline. DONE!!!

4:00 – 5:30 Antonio Workshop

Phil 10.9.18

7:00 – 4:00 ASRC BD

  • Drive to work in Tesla. Ride to pick up Porsche lunch-ish. Drive home with bike. Ride to work. Drive home with bike. Who knew that the Towers of Hanoi would be such practical training?
  • Finish Antonio response and send it off. To be complete, I think it needs a discussion of the structure of the paper and who is responsible for which section.
  • Artificial Intelligence and Social Simulation: Studying Group Dynamics on a Massive Scale
    • Recent advances in artificial intelligence and computer science can be used by social scientists in their study of groups and teams. Here, we explain how developments in machine learning and simulations with artificially intelligent agents can help group and team scholars to overcome two major problems they face when studying group dynamics. First, because empirical research on groups relies on manual coding, it is hard to study groups in large numbers (the scaling problem). Second, conventional statistical methods in behavioral science often fail to capture the nonlinear interaction dynamics occurring in small groups (the dynamics problem). Machine learning helps to address the scaling problem, as massive computing power can be harnessed to multiply manual codings of group interactions. Computer simulations with artificially intelligent agents help to address the dynamics problem by implementing social psychological theory in data-generating algorithms that allow for sophisticated statements and tests of theory. We describe an ongoing research project aimed at computational analysis of virtual software development teams.
    • This appears to be a simulation/real world project that models GitHub groups
  • Continue BAA work? I need to know what Matt’s found out about the topic.
    • Some good discussion. Got his email of notes from his meeting with Steve
    • Created a “Disruptioneering technical” template
    • Copied template and started filling in sections for technical
  • DARPA announced its new initiative, AI Next, which will invest $2 billion in AI R&D “to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.” Since fiscal 2017, DARPA has stepped up its investment in artificial intelligence by almost 50 percent, from $307 million to $448 million.
  • DARPA’s move follows the Pentagon’s June decision to launch a $1.7 billion Joint Artificial Intelligence Center, or JAIC (pronounced “Jake”), to promote collaboration on AI-related R&D among military service branches, the private sector, and academia. The challenge is to transform relatively smaller contracts and some prototype systems development into large scale field deployment.

Phil 10.8.18

7:00 – 12:00, 2:00 – 5:00 ASRC Research

  • Finish up At Home in the Universe notes – done!
  • Get started on framing out Antonio’s paper – good progress!
    • Basically, Aaron and I think there is a spectrum of interaction that can occur in these systems. At one end is some kind of market, where communication is mediated through price, time, and convenience to the transportation user. At the other is a more top-down, control-system way of dealing with this; NIST RCS would be an example. In between these two extremes are control hierarchies that in turn interact through markets
  • Wrote up some early thoughts on how simulation and machine learning can be a thinking fast and slow solution to understandable AI

Phil 10.5.18

7:00 – 5:00 ASRC MKT

  • Seasucker.com for roof racks?
  • Continuing to write up notes from At Home in the Universe.
  • Discussion with Matt about the deliverables for the Serial Interactions in Imperfect Information Games Applied to Complex Military Decision-Making (SI3-CMD) request
  • DARPA proposal template
  • Resized images for Aaron
  • Biodiversity and the Balance of Nature
    • As we destroy biological diversity, what else are we doing to the environment, what is being changed, and how will those changes affect us? One part of the answers to these questions is provided by Lawton and Brown’s consideration of redundancy (Chap. 12). Part of what the environment does for us involves “ecosystem services” — the movement of energy and nutrients through the air, water, and land, and through the food chains (Ehrlich, Foreword). Just how much biological diversity we need to keep the movement at approximately natural levels is a question of critical importance. Nonetheless, it is not a question that is commonly asked. A major synthesis of theories on the dynamics of nutrient cycling (DeAngelis 1991) devotes little space to the consequences of changes in the numbers of species per trophic level: It is the number of trophic levels that receives the attention. One might well conclude that, over broad limits, ecosystem services will continue to be provided, so long as there are some plants, some animals, some decomposers, and so on. Lawton and Brown conclude that numerous species are redundant.

Phil 10.3.18

7:00 – 5:30 ASRC MKT

  • Finished At Home in the Universe. Really good. I’ll work on writing up notes this evening. The Kindle clippings feature is awesome
  • The stampeding robots paper is up on arXiv: Disrupting the Coming Robot Stampedes: Designing Resilient Information Ecologies
  • Dopamine modulates novelty seeking behavior during decision making.
  • Need to finish Antonio’s paper, but my sense at this point is to add our work as a discussion of edge conditions that come up in the discussion section?
    • Done. Sent a letter discussing NIST RCS
  • Need to write up the fitness landscape thoughts (a sketch follows below). One axis is distance to model, which has a decay radius from each agent. Another axis is the price of an item (with future discounting?). Another axis is the cost by agent to acquire the item. Cluster behavior emerges from local agents trying to find the best model and acquire the most value? There is also some kind of explicit connection between individuals that needs to be handled (a tanker and a plane have a client-server relationship that requires them to move in a coordinated way)
    • There is also information that is within the agents, and information that is in the environment. There may be other types of information as well.
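    • A back-of-envelope sketch of combining the three axes into one fitness value. The functional forms, weights, and names are all illustrative assumptions:

```python
# Hypothetical fitness for an agent considering an item: model
# proximity decays exponentially, price is future-discounted, and the
# agent's own acquisition cost is subtracted.
import math

def fitness(dist_to_model: float, price: float, acquisition_cost: float,
            decay_radius: float = 1.0, discount: float = 0.9,
            steps_ahead: int = 1) -> float:
    model_term = math.exp(-dist_to_model / decay_radius)  # decays with distance
    discounted_value = price * (discount ** steps_ahead)  # future discounting
    return model_term * (discounted_value - acquisition_cost)

# An agent close to the best model, with a cheap acquisition path, wins
print(fitness(dist_to_model=0.5, price=10.0, acquisition_cost=4.0))
```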
  • Get Matt rolling on the whitepaper? – done!
  • Watson backend to A2P?
  • Kibitzed Aaron on how to access style sheets
  • Got about halfway through speaking notes on Army BAA

Phil 10.1.18

7:00 – 8:30 ASRC MKT?

  • Last Friday, Aaron was told by division leadership (Mike M) that R&D is being terminated as of Jan 1st and to get on billable projects. This is going against our impression of how things were going, so it’s unclear what will actually happen. So I’m not looking for a job just yet… Personally, I blame putting a deposit down on this: Tesla3
  • This looks interesting:
    • Launched in October 2015 by founding editor Robert Kadar with support from Joe Brewer, David Sloan Wilson, The Evolution Institute, and Steve Roth — who now serves as publisher — Evonomics has emerged as a powerful voice for the sea change that is sweeping through economics.
  • Working my way through At Home in the Universe
    • Fontana Lab
      • Molecular biology offers breathtaking views of the parts and processes that undergird life and its evolution. It is vexing, then, that we seem unable to analytically grasp the principles that would make the nature of cellular phenotypes more intelligible and their control more deliberate. One can always blame insufficient knowledge, but we also entertain the idea that physics and chemistry need formal and conceptual enrichment from computer science to become an appropriate foundation for systems biology. This view arises from the belief that computation is a natural phenomenon, like gravity or boiling water. We need adequate formalisms and models to reason about computation in the wild. This view guides many of our lab’s interests, which span the development and application of rule-based formalisms for modeling complex systems of molecular interaction, causality in concurrent systems, the interplay between network growth and network dynamics, phenotypic plasticity and evolvability, learning, and aging. Our approach is computational and theoretical. In the past we also conducted experimental work using C. elegans as a model system. Outside collaborations are essential to our group. The size of our team can fluctuate considerably, as we chase grants in pursuit of our passions, not opportunistically.
  • Due date for the iConference Paper. Submitted last night just to be safe, but I expect to tweak today.
    • incorporating Wayne’s changes
    • Final push with Wayne on campus
    • Done! Submitted
    • Need to upload to arXiv (try multiple tex files)
  • From The Atlantic – stampede end condition:
    • It is impossible at this moment to envisage the Republican Party coming back. Like a brontosaurus with some brain-eating disorder it might lumber forward in the direction dictated by its past, favoring deregulation of businesses here and standing up to a rising China there, but there will be no higher mental functioning at work. And so it will plod into a future in which it is detested in a general way by women, African Americans, recent immigrants, and the educated young as well as progressives pure and simple. It might stumble into a political tar pit and cease to exist or it might survive as a curious, decaying relic of more savage times and more primitive instincts, lashing out and crushing things but incapable of much else.

Phil 9.28.18

7:30 – 4:00 ASRC MKT

  • Stumbled on this podcast this morning: How Small Problems Snowball Into Big Disasters
  • How to Prepare for a Crisis You Couldn’t Possibly Predict
  • I’m trying to think about how this should be applied to human/machine ecologies. I think that simulation is really important because it lets one model patch compare itself against another model without real-world impacts. This has something to do with a shared, multi-instance environment simulation as well. The environment provides one level of transparent interaction, but there also needs to be some level of inadvertent social information that shows some insight into how a particular system is working.
    • When the simulation and the real world start to diverge for a system, that needs to be signaled (a monitoring sketch follows this list)
    • Systems need to be able to “look into” other simulations and compare like with like. So a tagged item (bicycle) in one sim is the same in another.
    • Is there an OS that hands out environments?
    • How does a decentralized system coordinate? Is there an answer in MMOGs?
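    • A minimal sketch of that divergence signal, comparing a simulation’s predictions against observations over a sliding window. Window size, threshold, and names are illustrative assumptions:

```python
# Hypothetical monitor: raise a flag when the running mean error
# between simulated and observed values exceeds a threshold.
from collections import deque

class DivergenceMonitor:
    def __init__(self, window: int = 20, threshold: float = 0.5):
        self.errors = deque(maxlen=window)
        self.threshold = threshold

    def update(self, simulated: float, observed: float) -> bool:
        self.errors.append(abs(simulated - observed))
        mean_error = sum(self.errors) / len(self.errors)
        return mean_error > self.threshold  # True = signal divergence

monitor = DivergenceMonitor(window=3, threshold=0.5)
for sim, real in [(1.0, 1.1), (1.2, 1.9), (1.4, 2.6)]:
    print(monitor.update(sim, real))  # False, False, True
```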
  • Kate Starbird’s presentation was interesting as always. We had a chance to talk afterwards, and she’d like to see our work, so I’ve sent her links to the last two papers.
  • I also met Bill Braniff, who is the director of the UMD National Consortium for the Study of Terrorism and Responses to Terrorism (START). He got papers too, with a brief description of how mapping could aid in the detection of radicalization patterns.
  • Then at lunch, I had a chance to meet with Roger Bostelman from NIST. He’s interested in writing standards for fleet and swarm vehicles, and in making sure that standards mitigate the chance of stampeding autonomous vehicles, so I sent him the Blue Sky draft.
  • And lastly, I got a phone call from Aaron, who says that our project will be terminated December 31, after which there will be no more IR&D at ASRC. It was a nice run while it lasted. And they may change their minds, but I doubt it.

Phil 9.27.18

7:00 – 6:00 ASRC MKT

  • Writing your own LaTeX class
  • Multiple facets of biodiversity drive the diversity–stability relationship
    • A substantial body of evidence has demonstrated that biodiversity stabilizes ecosystem functioning over time in grassland ecosystems. However, the relative importance of different facets of biodiversity underlying the diversity–stability relationship remains unclear. Here we use data from 39 grassland biodiversity experiments and structural equation modelling to investigate the roles of species richness, phylogenetic diversity and both the diversity and community-weighted mean of functional traits representing the ‘fast–slow’ leaf economics spectrum in driving the diversity–stability relationship. We found that high species richness and phylogenetic diversity stabilize biomass production via enhanced asynchrony in the performance of co-occurring species. Contrary to expectations, low phylogenetic diversity enhances ecosystem stability directly, albeit weakly. While the diversity of fast–slow functional traits has a weak effect on ecosystem stability, communities dominated by slow species enhance ecosystem stability by increasing mean biomass production relative to the standard deviation of biomass over time. Our in-depth, integrative assessment of factors influencing the diversity–stability relationship demonstrates a more multicausal relationship than has been previously acknowledged.
  • Computer Algorithms, Market Manipulation and the Institutionalization of High Frequency Trading (adversarial herding?)
    • The article discusses the use of algorithmic models in finance (algo or high frequency trading). Algo trading is widespread but also somewhat controversial in modern financial markets. It is a form of automated trading technology, which critics claim can, among other things, lead to market manipulation. Drawing on three cases, this article shows that manipulation also can happen in the reverse way, meaning that human traders attempt to make algorithms ‘make mistakes’ by ‘misleading’ them. These attempts to manipulate are very simple and immediately transparent to humans. Nevertheless, financial regulators increasingly penalize such attempts to manipulate algos. The article explains this as an institutionalization of algo trading, a trading practice which is vulnerable enough to need regulatory protection.
  • Karin Knorr Cetina is interested in financial markets, knowledge and information, as well as in globalization, theory and culture. Her current projects include a book on global foreign exchange markets and on post-social knowledge societies. She continues to do research on the information architecture of financial markets, on their “global microstructures” (the global social and cultural form these markets take) and on trader markets in contrast to producer markets. She also studies globalization from a microsociological perspective, using an ethnographic approach, and she continues to be interested in “laboratory studies,” the study of science, technology and information at the site of knowledge production – particularly in the life sciences and in particle physics.
  • Reading A Sociology of Algorithms: High-Frequency Trading and the Shaping of Markets
    • “Markets are politics” (pg 8). I’d reverse that and say that politics are a market for power/influence, though that may be too glib.
    • three main types of algorithm discussed here (trading venues’ matching engines, which consummate trades; execution algorithms used by institutional investors to buy or sell large blocks of shares; and HFT algorithms), (pg 11)
    • a “lit” venue is one in which the electronic order book is visible to the humans and algorithms that trade on the venue; in a “dark” venue it is not visible.  (pg 11)
  • Meeting with USPTO folks. I went over their heads, but Aaron found the right level.

Phil 9.21.18

7:00 – 4:00 ASRC MKT

  • “Whose idea was it to connect every idiot on the internet with every other idiot?” PJ O’Rourke, Commonwealth Club, 2018
  • “Running Programs In Reverse for Deeper A.I.” by Zenna Tavares
    • In this talk I show that inverse simulation, i.e., running programs in reverse from output to input, lies at the heart of the hardest problems in both human cognition and artificial intelligence. How humans are able to reconstruct the rich 3D structure of the world from 2D images; how we predict that it is safe to cross a street just by watching others walk, and even how we play, and sometimes win at Jenga, are all solvable by running programs backwards. The idea of program inversion is old, but I will present one of the first approaches to take it literally. Our tool ReverseFlow combines deep-learning and our theory of parametric inversion to compile the source code of a program (e.g., a TensorFlow graph) into its inverse, even when it is not conventionally invertible. This framework offers a unified and practical approach to both understand and solve the aforementioned problems in vision, planning and inference for both humans and machines.
  • Bot-ivistm: Assessing Information Manipulation in Social Media Using Network Analytics
    • Matthew Benigni 
    • Kenneth Joseph
    • Kathleen M. Carley (Scholar)
    • Social influence bot networks are used to effect discussions in social media. While traditional social network methods have been used in assessing social media data, they are insufficient to identify and characterize social influence bots, the networks in which they reside and their behavior. However, these bots can be identified, their prevalence assessed, and their impact on groups assessed using high dimensional network analytics. This is illustrated using data from three different activist communities on Twitter—the “alt-right,” ISIS sympathizers in the Syrian revolution, and activists of the Euromaidan movement. We observe a new kind of behavior that social influence bots engage in—repetitive @mentions of each other. This behavior is used to manipulate complex network metrics, artificially inflating the influence of particular users and specific agendas. We show that this bot behavior can affect network measures by as much as 60% for accounts that are promoted by these bots. This requires a new method to differentiate “promoted accounts” from actual influencers. We present this method. We also present a method to identify social influence bot “sub-communities.” We show how an array of sub-communities across our datasets are used to promote different agendas, from more traditional foci (e.g., influence marketing) to more nefarious goals (e.g., promoting particular political ideologies).
  • Pinged Aaron M. about writing an article
  • More iConf paper. Got a first draft on everything but the discussion section

Phil 9.19.18

7:00 – 5:30 ASRC MKT

  • More iConf paper
  • GSS Meeting?
  • Meeting with Wayne? No, he’s out till Thursday
  • Pinged Don about Aaron Mannes. He’s OOO as well
  • Understanding the interplay between social and spatial behaviour
    • Laura Alessandretti
    • Sune Lehmann
    • Andrea Baronchelli
    • According to personality psychology, personality traits determine many aspects of human behaviour. However, validating this insight in large groups has been challenging so far, due to the scarcity of multi-channel data. Here, we focus on the relationship between mobility and social behaviour by analysing trajectories and mobile phone interactions of 1000 individuals from two high-resolution longitudinal datasets. We identify a connection between the way in which individuals explore new resources and exploit known assets in the social and spatial spheres. We show that different individuals balance the exploration-exploitation trade-off in different ways and we explain part of the variability in the data by the big five personality traits. We point out that, in both realms, extraversion correlates with the attitude towards exploration and routine diversity, while neuroticism and openness account for the tendency to evolve routine over long time-scales. We find no evidence for the existence of classes of individuals across the spatio-social domains. Our results bridge the fields of human geography, sociology and personality psychology and can help improve current models of mobility and tie formation.
    • This looks to be a missing link paper that I can use to connect animal behavior in physical space and human behavior in belief space
  • A Sociology of Algorithms: High-Frequency Trading and the Shaping of Markets
    • Donald MacKenzie
      • My current research is on the sociology of markets, focusing on automated trading. I’ve worked in the past on topics ranging from the sociology of nuclear weapons to the meaning of proof in the context of computer systems critical to safety or security.
    • Computer algorithms are playing an ever more important role in financial markets. This paper proposes and exemplifies a sociology of algorithms that is (i) historical, in that it demonstrates path-dependence in the development of automated markets; (ii) ecological (in Abbott’s sense), in that it shows how automated high-frequency trading (HFT) is both itself an ecology and also is shaped by other linked ecologies (especially those of trading venues and of regulation); and (iii) “Zelizerian,” in that it highlights the importance of boundary work, especially of efforts to distinguish between (in effect) “good” and “bad” actors and algorithms. Empirically, the paper draws on interviews with 43 practitioners of HFT, and on a wider historical-sociology study (including interviews with a further 44 people) of the development of trading venues. The paper investigates the practices of HFT and analyses (in historical, ecological, and “Zelizerian” terms) how these differ in three different contexts (two types of share trading and foreign exchange).
  • A2P marketing meeting in Greenbelt
  • Long discussion on networks and the stiffness of links

Phil 9.17.18

7:00 – ASRC MKT

  • Dan Ariely Professor of psychology and behavioral economics, Duke University (Scholar)
    • Controlling the Information Flow: Effects on Consumers’ Decision Making and Preferences
      • One of the main objectives facing marketers is to present consumers with information on which to base their decisions. In doing so, marketers have to select the type of information system they want to utilize in order to deliver the most appropriate information to their consumers. One of the most interesting and distinguishing dimensions of such information systems is the level of control the consumer has over the information system. The current work presents and tests a general model for understanding the advantages and disadvantages of information control on consumers’ decision quality, memory, knowledge, and confidence. The results show that controlling the information flow can help consumers better match their preferences, have better memory and knowledge about the domain they are examining, and be more confident in their judgments. However, it is also shown that controlling the information flow creates demands on processing resources and therefore under some circumstances can have detrimental effects on consumers’ ability to utilize information. The article concludes with a summary of the findings, discussion of their application for electronic commerce, and suggestions for future research avenues.
      • This may be a good example of work that relates to socio-cultural interfaces.
  • Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment
    • Josiah Ober (Scholar)
    •  The Greeks had experts determine choices, and the public vote between the expert choices
    • A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world. Analysis of passages in Aristotle and legislative process in classical Athens points to a “middle way” between independent-guess aggregation and deliberation: an epistemic approach to decision-making that offers a satisfactory model of collective judgment that is both time-sensitive and capable of setting agendas endogenously. By aggregating expertise across multiple domains, Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest. REA differs from a standard Condorcet jury in combining deliberation with voting based on judgments about the reputations and arguments of domain-experts.
  • NESTA Center for Collective Intelligence Design
    • The Centre for Collective Intelligence Design will explore how human and machine intelligence can be combined to make the most of our collective knowledge and develop innovative and effective solutions to social challenges.
    • Call for ideas (JuryRoom!)
      • Nesta is offering grants of up to £20,000 for projects that generate new knowledge on how to advance collective intelligence (combining human and machine intelligence) to solve social problems.
  • Synchronize gdrive, subversion
  • Finish abstract review
  • Organize iConf paper into something more coherent
    • Created folder for lit review
  • Start putting together notes on At Home in the Universe?
  • Ping folks from SASO
    • Graph Laplacian paper
    • Cycling stuff
  • Fika?
  • Meeting with Wayne?

Phil 9.8.18

How intermittent breaks in interaction improve collective intelligence

  • Many human endeavors—from teams and organizations to crowds and democracies—rely on solving problems collectively. Prior research has shown that when people interact and influence each other while solving complex problems, the average problem-solving performance of the group increases, but the best solution of the group actually decreases in quality. We find that when such influence is intermittent it improves the average while maintaining a high maximum performance. We also show that storing solutions for quick recall is similar to constant social influence. Instead of supporting more transparency, the results imply that technologies and organizations should be redesigned to intermittently isolate people from each other’s work for best collective performance in solving complex problems.

Will Foreign Agents Rig the U.S. Midterm Elections Through Social Media?

  • Samantha Bradshaw, an expert on computational propaganda, weighs in on whether Facebook, Twitter, and others are doing enough to curb political social media bots.

Detecting signs of dementia using word vector representations

  • Recent approaches to word vector representations, e.g., ‘word2vec’ and ‘GloVe’, have been shown to be powerful methods for capturing the semantics and syntax of words in a text. The approaches model the co-occurrences of words and recent successful applications on written text have shown how the vector representations and their interrelations represent the meaning or sentiment in the text. Most applications have targeted written language, however, in this paper, we investigate how these models port to the spoken language domain where the text is the result of (erroneous) automatic speech transcription. In particular, we are interested in the task of detecting signs of dementia in a person’s spoken language. This is motivated by the fact that early signs of dementia are known to affect a person’s ability to express meaning articulately for example when they engage in a conversation – something which is known to be cognitively very demanding. We analyse conversations designed to probe people’s short and long-term memory and propose three different methods for how word vectors may be used in a classification setup. We show that it is possible to identify dementia from the output of a speech recognizer despite a high occurrence of recognition errors.

Phil 8.31.18

7:00 – 5:00 ASRC MKT

  • The lightning round slides are in!
  • Get Speaker – done
  • Get posters – done
  • Haircut – done
  • drop off DME/KLR – done
  • Under Pressure response – done, I think?
  • Upload ML Excel files (done) to play around with graph Laplacians some more – done
  • Print out two travel packets – done
  • create shared itinerary document – started. Aaron needs to finish his part
  • From KQED Silicon Valley Conversations The Future of Music: Computer or Composer
    • Ge Wang is an Associate Professor at Stanford University in the Center for Computer Research in Music and Acoustics (CCRMA). He specializes in the art of computer music design — researching programming languages and interactive software design for music, interaction design, expressive mobile music, new performance ensembles (laptop orchestra and mobile phone orchestra), human-computer interaction, visualization (sndpeek), music game design, aesthetics of technology-mediated design, and methodologies for education at the intersection of art, engineering, and design.
    • Doug Eck is a research scientist working on Magenta, a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it’s also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Started by me in 2016, Magenta now involves several researchers and engineers from the Google Brain team as well as many others collaborating via open source. Aside from Magenta, I’m working on sequence learning models for summarization and text generation as well new ways to improve AI-generated content based on user feedback.
    • Amy X Newburg has been developing her own brand of irreverently genre-crossing works for voice, live electronics and chamber ensembles for over 25 years, known for her innovative use of live looping technology with electronic percussion, her 4-octave vocal range and her colorful — often humorous — lyrics. One of the earliest performers to work with live digital looping, Amy has presented her solo “avant-cabaret” songs at such diverse venues as the Other Minds and Bang on a Can new music festivals, the Berlin International Poetry Festival, the Wellington and Christchurch Jazz Festivals (New Zealand), the Warsaw Philharmonic Hall, electronic music festivals, colleges, rock clubs and concert halls throughout the U.S. and abroad.
  • Teens, Social Media & Technology 2018
    • YouTube, Instagram and Snapchat are the most popular online platforms among teens. Fully 95% of teens have access to a smartphone, and 45% say they are online ‘almost constantly’
  • Aaron found this: Density-functional fluctuation theory of crowds
    • A primary goal of collective population behavior studies is to determine the rules governing crowd distributions in order to predict future behaviors in new environments. Current top-down modeling approaches describe, instead of predict, specific emergent behaviors, whereas bottom-up approaches must postulate, instead of directly determine, rules for individual behaviors. Here, we employ classical density functional theory (DFT) to quantify, directly from observations of local crowd density, the rules that predict mass behaviors under new circumstances. To demonstrate our theory-based, data-driven approach, we use a model crowd consisting of walking fruit flies and extract two functions that separately describe spatial and social preferences. The resulting theory accurately predicts experimental fly distributions in new environments and provides quantification of the crowd “mood”. Should this approach generalize beyond milling crowds, it may find powerful applications in fields ranging from spatial ecology and active matter to demography and economics.
    • Here’s an interesting part: The DFFT analysis that we present is particularly powerful because it separates the influence of the environment on agents from interactions among those agents. 
      • This implies that it should (could? might?) be possible to calculate a social/environmental ratio for individual agents. Agents with a high environmental weighting would be nomadic; agents with a high social weighting would be stampede-prone. Need to dig in further (a back-of-envelope sketch follows this item).
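      • A back-of-envelope sketch of that ratio, assuming DFFT-style preference functions have already been fitted from density data. Both toy functions and the ratio itself are illustrative assumptions, not part of the paper:

```python
# Hypothetical per-agent ratio: 0.0 means purely environment-driven
# ("nomadic"), 1.0 means purely socially driven ("stampede-prone").
def social_environment_ratio(env_preference, social_preference,
                             location, local_count):
    env = abs(env_preference(location))        # pull of the environment here
    soc = abs(social_preference(local_count))  # pull of the nearby crowd
    return soc / (soc + env)

env_pref = lambda x: 1.0 / (1.0 + x)  # toy fitted spatial preference
soc_pref = lambda n: 0.1 * n          # toy fitted social preference
print(social_environment_ratio(env_pref, soc_pref, location=2.0, local_count=5))
```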
  • Mechanical Vibrations and Waves » Lecture 4: Coupled Oscillators, Normal Modes (MIT OpenCourseWare)
    • Prof. Lee analyzes a highly symmetric system which contains multiple objects. By physics intuition, one can identify a special kind of motion – the normal modes. He shows that there is a general strategy for solving for the normal modes (a minimal eigenvalue sketch follows these notes).
      • Every part of the system is oscillating at the same frequency and the same phase
      • Stopped at 42:07 to take a break. I think this is the right track though. Download this for the plane?
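      • A minimal sketch of that strategy for two equal masses coupled by three identical springs (wall-mass-mass-wall), solving the eigenvalue problem for the stiffness matrix. Masses, spring constants, and names are illustrative assumptions:

```python
# Normal modes from M x'' = -K x: eigenvectors of K/m are the mode
# shapes, square roots of the eigenvalues are the mode frequencies.
import numpy as np

m, k = 1.0, 1.0
K = np.array([[2 * k, -k],
              [-k, 2 * k]])  # stiffness matrix for wall-m-m-wall
eigvals, eigvecs = np.linalg.eigh(K / m)
freqs = np.sqrt(eigvals)

# In each mode, every mass oscillates at the same frequency with a
# fixed phase relation: in-phase at omega = sqrt(k/m), out-of-phase
# at omega = sqrt(3k/m).
for omega, mode in zip(freqs, eigvecs.T):
    print(f"omega = {omega:.3f}, mode shape = {mode}")
```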
  • Chapter on normal modes