Got accepted for my talk at the 92nd MORS symposium!
IUI 2024 Notes
I had an interesting chat yesterday with Ossi about active measures leading up to and since October 7. We need some time to sit down and talk. My sense is that all sides have been under prolonged external influence, with the specific intent of raising the political temperature so that exactly this situation would happen.
Keynote: Prof. Krzysztof Gajos (check Google Scholar for references)
- Predictive text can manipulate users, who wind up reflecting the biases of the predictive text model. Change the organization’s model, change the bias of the organization.
- The mere presence of an explanation increases the credibility of AI assistance, regardless of the content. Fact-like assertions increase the perceived competence of the AI. This is a dark pattern that needs to be detected.
- Learning requires cognitive engagement, but whether the AI gives answers directly or uses cognitive forcing does not impact the amount of learning
- Providing the material to support a decision, but not a suggested decision, worked better than any of the answer-based decision aids. This may be a key to human-AI complementarity
- Denial of a request is processed emotionally, not cognitively. This is another vector that needs to be recognized and adapted to. A source of examples could be paper submission rejections.
- Critical technical practice – question assumptions
HCAI, Bias and Fairness in AI
- BiasEye: A Bias-Aware Real-time Interactive Material Screening System for Impartial Candidate Assessment – really good visualizations for decision support. An interesting use of an LLM to extract the data from a form; the rest is mostly statistical comparisons between students. I wonder if this would be another way to have operators rate models. One of the most interesting visualizations used t-SNE to cluster similar students on a canvas, where each student was a set of concentric pie charts showing the difference between actual and expected performance (I sketch that idea in code after this list). Also, lots of culture-specific measures such as “honor”
- Understanding Users’ Dissatisfaction with ChatGPT Responses: Types, Resolving Tactics, and the Effect of Knowledge Level – looks like it might be a really good source for working out a framework for evaluating effective conversational interfaces – what works, what doesn’t, etc.
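
To make that BiasEye canvas concrete for myself: a minimal sketch of how I read the visualization, with entirely synthetic data and my own guesses at the encoding (outer ring as expected performance, inner disc as actual). This is not their implementation.

```python
# Sketch of the BiasEye-style canvas as I understood it (synthetic data):
# t-SNE positions each student, and concentric wedges contrast expected
# (outer ring) with actual (inner disc) performance.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
n_students, n_features = 30, 12
features = rng.normal(size=(n_students, n_features))  # stand-in for screening data
expected = rng.uniform(0.3, 1.0, n_students)          # predicted performance
actual = np.clip(expected + rng.normal(0, 0.2, n_students), 0, 1)

coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)

fig, ax = plt.subplots(figsize=(8, 8))
for (x, y), exp, act in zip(coords, expected, actual):
    # Each score is drawn as a clockwise arc starting at 12 o'clock.
    ax.add_patch(Wedge((x, y), 3.0, 90 - 360 * exp, 90, width=1.2,
                       facecolor="tab:blue", alpha=0.5))
    ax.add_patch(Wedge((x, y), 1.6, 90 - 360 * act,
                       90, facecolor="tab:orange", alpha=0.8))
ax.set_xlim(coords[:, 0].min() - 5, coords[:, 0].max() + 5)
ax.set_ylim(coords[:, 1].min() - 5, coords[:, 1].max() + 5)
ax.set_aspect("equal")
ax.set_title("Expected (outer ring) vs. actual (inner disc) performance")
plt.show()
```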
AI Tools, User Interfaces and Interaction
- SpaceEditing: A Latent Space Editing Interface for Integrating Human Knowledge into Deep Neural Networks – This is an interesting take on hand-tweaking the manifold projection (toy sketch below). It might be very good for DTA
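
A toy version of the round trip, which is my reconstruction and not the paper’s method: I use PCA instead of their learned projection because PCA has an exact inverse_transform, so edits on the 2D canvas map straight back into the latent space.

```python
# Toy latent-space editing loop (my reconstruction, not SpaceEditing itself):
# project latent codes to 2D, apply a human edit on the canvas, map it back.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 64))   # stand-in for encoder outputs

proj = PCA(n_components=2).fit(latents)
coords = proj.transform(latents)       # 2D canvas positions

# Human edit: drag point 0 toward the cluster it "should" belong to.
coords[0] += np.array([1.5, -0.5])

edited = proj.inverse_transform(coords)  # back to 64-D latent space
# `edited` would then feed the decoder, or serve as supervision to
# fine-tune the network toward the human's arrangement.
```

The interesting part of the paper is presumably how those mapped-back edits update the network; the sketch stops at the round trip.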
AI for Health
- How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems – Informational awareness. Nice perspective on AI Operators
Dinner Banquet – fun 🙂
Had a thought for the day. For a learning assignment, have students build a context prompt that lets an LLM answer a question on the rubric correctly. Bonus points if the model is able to answer a question outside the domain where a raw LLM has struggled. This way you have a project that requires students to learn the topic and also exposes them to the strengths and weaknesses of LLMs. Not sure if this is a good idea, but it could be worth poking at.
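
A rough harness for grading it, assuming the OpenAI Python client; the model name, prompts, and rubric question are all placeholders, not anything I’ve actually run.

```python
# Hypothetical assignment harness: score a rubric question with and without
# the student's context prompt. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The student's deliverable: a context prompt that teaches the model
# enough domain background to answer the rubric question correctly.
student_context = """You are answering questions about <topic>.
Background the raw model tends to get wrong:
1. ...
2. ...
"""

rubric_question = "..."  # instructor-supplied question from the rubric

def ask(context: str | None, question: str) -> str:
    messages = [{"role": "system", "content": context}] if context else []
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages)
    return response.choices[0].message.content

# Compare the two answers against the rubric to show how much the
# student's prompt moved the model.
print("raw model:    ", ask(None, rubric_question))
print("with context: ", ask(student_context, rubric_question))
```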
