Phil 10.3.17

Phil 7:00 – 5:00 ASRC MKT

1 thought on “Phil 10.3.17”

  1. Cindy Flatley

    Phil, regarding the NASA risk-scenario data I’ve been trying to track down for you, could you provide a bit more information, such as the resources where it was mentioned? I was able to track down a description of the psychological tests conducted to screen Mercury astronauts here (the description starts on page 83), but no test matching the conditions you described is listed. If you could point me to the resource where you saw it mentioned (perhaps somewhere in Moscovici’s work?), I might be able to pull out better search terms or researcher names to track it down for you.
    I am worried, though, that if the risk-scenario data originates from psychological screening of potential first astronauts, selection bias may skew the behavior it captures. The first astronauts were selected from a very narrow population: men within a certain age range, with specific intellectual qualifications and occupational backgrounds. I expect they would skew toward explorer behavior far more than the general population.

    While researching the risk-scenario data, I came across this paper that, while not directly related, I thought might have implications for your game design or reporting work.
    “The goal of this SBIR program was to provide authorable, dialog-enabled agents for tutoring and performance support systems. Users interact with agents who carry out strategies and goals and can engage in mixed-initiative dialog via a natural language understanding and generation system. Non-programmers can author new domains and scenarios and create new dialog agents. The dialog system is authorable by non-computational linguists. The system has two types of agents, Mentor agents and Conversational agents. The Mentor agent is a simulated subject matter expert (SME) that provides troubleshooting and problem solving advice. Mentor engages in a dialogue with trainees, helping them solve problems by taking them through logical courses of action and asking and answering domain-specific questions. Conversational agents are used for role-playing scenarios. The only real difference between the two agents is that Conversational agents do not have specific problem solving strategies. Both Mentors and Conversational agents have domain specific knowledge and access to a common sense knowledge base. This report describes the capabilities and limitations of the results of this Phase II effort.”
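    The agent distinction the abstract draws (Conversational agents have knowledge bases but no solver; Mentor agents add an explicit troubleshooting strategy) could be sketched roughly as below. This is only an illustration of the described architecture; the class and method names are my own invention, not the SBIR system’s actual API.

```python
class ConversationalAgent:
    """Role-playing agent: domain knowledge plus a common-sense KB, no solver."""
    def __init__(self, domain_kb, common_sense_kb):
        self.domain_kb = domain_kb
        self.common_sense_kb = common_sense_kb

    def respond(self, utterance):
        # Answer from domain knowledge first, fall back to common sense.
        return self.domain_kb.get(utterance) or self.common_sense_kb.get(
            utterance, "Tell me more.")


class MentorAgent(ConversationalAgent):
    """Simulated SME: the same knowledge, plus a problem-solving strategy."""
    def __init__(self, domain_kb, common_sense_kb, strategy):
        super().__init__(domain_kb, common_sense_kb)
        self.strategy = strategy  # ordered troubleshooting steps

    def next_step(self, state):
        # Walk the trainee through the first step they have not yet completed.
        for step in self.strategy:
            if step not in state["completed"]:
                return f"Try this next: {step}"
        return "Problem solved, nice work."
```

    The point of the inheritance here is that, as the abstract says, the solver strategy is the *only* real difference between the two agent types.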

    Separately, I’ve tracked down the work I mentioned to you that relates to a semantic spatial model for interacting with large sets of documents. You might find Dr. Bradel’s research useful: it has implications for information foraging and the design of information-retrieval systems, and could make your own research easier.
    Multi-Model Semantic Interaction for Scalable Text Analytics
    “Learning from text data often involves a loop of tasks that iterate between foraging for information and synthesizing it in incremental hypotheses. Past research has shown the advantages of using spatial workspaces as a means for synthesizing information through externalizing hypotheses and creating spatial schemas. However, spatializing the entirety of datasets becomes prohibitive as the number of documents available to the analysts grows, particularly when only a small subset are relevant to the tasks at hand. To address this issue, we developed the multi-model semantic interaction (MSI) technique, which leverages user interactions to aid in the display layout (as was seen in previous semantic interaction work), forage for new, relevant documents as implied by the interactions, and then place them in context of the user’s existing spatial layout. This results in the ability for the user to conduct both implicit queries and traditional explicit searches. A comparative user study of StarSPIRE discovered that while adding implicit querying did not impact the quality of the foraging, it enabled users to 1) synthesize more information than users with only explicit querying, 2) externalize more hypotheses, 3) complete more synthesis-related semantic interactions. Also, 18% of relevant documents were found by implicitly generated queries when given the option. StarSPIRE has also been integrated with web-based search engines, allowing users to work across vastly different levels of data scale to complete exploratory data analysis tasks (e.g. literature review, investigative […]

    The core contribution of this work is multi-model semantic interaction (MSI) for usable big data analytics. This work has expanded the understanding of how user interactions can be interpreted and mapped to underlying models to steer multiple algorithms simultaneously and at varying levels of data scale. This is represented in an extendable multi-model semantic interaction pipeline. The lessons learned from this dissertation work can be applied to other visual analytics systems, promoting direct manipulation of the data in context of the visualization rather than tweaking algorithmic parameters and creating usable and intuitive interfaces for big data analytics.”
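    To make the “implicit query” idea concrete: the abstract says MSI infers search queries from the user’s spatial interactions rather than from typed keywords. A minimal sketch of that inference might look like the following, assuming interactions are logged as (action, document-terms) pairs and that stronger interactions are weighted more heavily. The weighting scheme and action names here are my assumptions, not StarSPIRE’s actual model.

```python
from collections import Counter

def implicit_query(interactions, top_k=3):
    """Derive an implicit search query from logged spatial interactions.

    `interactions` is a list of (action, terms) pairs, where `terms` are
    words from the document the user acted on. Deliberate interactions
    (moving or highlighting a document) count for more than passive ones.
    """
    weights = {"highlight": 3.0, "move": 2.0, "read": 1.0}
    score = Counter()
    for action, terms in interactions:
        w = weights.get(action, 1.0)
        for term in terms:
            score[term] += w
    # The highest-scoring terms become the implicit query used to forage
    # for new documents, which are then placed into the existing layout.
    return [term for term, _ in score.most_common(top_k)]

interactions = [
    ("move", ["risk", "astronaut", "screening"]),
    ("highlight", ["risk", "selection"]),
    ("read", ["selection", "astronaut"]),
]
print(implicit_query(interactions))  # → ['risk', 'selection', 'astronaut']
```

    The study’s finding that 18% of relevant documents came from such implicitly generated queries suggests this background foraging meaningfully supplements explicit search.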
    Another article on spatial manipulation of big data, focusing more on user interactions:

    (PDF: e1ef36d49dbf13d7016e3da867334d3068a6.pdf)

    “To tackle the onset of big data, visual analytics (VA) seeks to marry the human intuition of visualization with mathematical models’ analytical horsepower. A critical question is, how will humans interact with and steer these complex mathematical models? Initially, users applied direct manipulation to such models in the same way they applied it to simpler visualizations in the premodel era—by using control panels to directly manipulate model parameters. However, opportunities are arising for direct manipulation of the model outputs, where the users’ thought processes take place, rather than the inputs. Here we present this new agenda for direct manipulation for VA.”
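    The “manipulate the outputs, not the inputs” idea can be illustrated with a toy example: instead of setting attribute weights in a control panel, the user drags two documents closer together in the layout, and the system infers which shared attributes should be upweighted. This is only a sketch of the general concept under my own simplifying assumptions (documents as attribute sets, a fixed learning rate), not the paper’s algorithm.

```python
def update_weights(weights, doc_a, doc_b, rate=0.5):
    """The user dragged doc_a and doc_b closer together in the layout.

    Infer that the attributes the two documents share matter more to the
    user, upweight them, and renormalize, so the model is steered through
    the visualization itself rather than through parameter controls.
    """
    shared = set(doc_a) & set(doc_b)
    boosted = {k: w * (1 + rate) if k in shared else w
               for k, w in weights.items()}
    total = sum(boosted.values())
    return {k: w / total for k, w in boosted.items()}

weights = {"risk": 0.25, "crew": 0.25, "budget": 0.25, "orbit": 0.25}
doc_a = {"risk", "crew"}
doc_b = {"risk", "orbit"}
print(update_weights(weights, doc_a, doc_b))  # "risk" gains weight
```

    The design point is the one the abstract makes: the interaction happens where the user’s thinking happens (the layout), and the inverse update to the model parameters stays hidden.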

Comments are closed.