Keynote Speakers

Title: One-size Explanations still do not fit all

Abstract: Many people recognize the importance of explainable AI, yet we often lose sight of the purpose of these explanations. This talk explores why we should develop decision support systems that can explain themselves. It highlights state-of-the-art explanation approaches across domains such as music, tourism, and search results, illustrating how they help align the mental models of systems and users. At the same time, generating rich and complex explanations is not enough to support effective decision-making. Designers must carefully decide which information to present and how to present it, taking into account the target users and contextual factors. With the rise of generative AI, these remain open research questions, even if generating explanations may, at first sight, seem quicker or easier. We must also reconsider what has changed: new challenges beyond hallucinations have become more prominent, including subtle stereotyping and data leakage.
 
Bio: Prof. Nava Tintarev is a Full Professor in Explainable AI at Maastricht University in the Department of Advanced Computing Sciences (DACS). Her interdisciplinary work bridges computer science and human-centered evaluation, focusing on making AI systems more transparent and increasing user control. Prof. Tintarev specializes in developing interactive explanation interfaces for recommender systems and search technologies, emphasizing user empowerment and decision support. She is a director of the ICAI TAIM lab, working on trustworthy AI in media. She was a founding co-investigator of ROBUST, a €87 million Dutch national initiative advancing trustworthy AI. Her research has also been supported by major organizations including IBM, Twitter, and the European Commission. Recognized as a Senior Member of the ACM in 2020, Prof. Tintarev's team has received multiple best paper awards for research contributions at UMAP, CHI, CHIIR, Hypertext, and HCOMP. She is also active in developing research policy for computer science in the Netherlands, as Chair of the round table for informatics and as a board member of the ICT-Research Platform Netherlands (IPN).

Title: Decoding Gameplay: Foundation Models and the Future of Player Context

Abstract: The success of Foundation Models (FMs) in natural language often masks the significant hurdles of applying them to high-frequency, non-textual human behavior. This talk explores the transition to User Foundation Models, using the multi-dimensional environment of gaming as a lens for understanding human-system interaction at scale. The session addresses the friction between building universal representations and maintaining the precision required for task-specific predictions. By examining the challenges of non-text modalities and the limitations of current time-series approaches, the talk outlines the practical realities of moving beyond language to model complex, dynamic human activities. Looking toward future directions, the talk explores whether the field is heading toward new paradigms to capture the underlying dynamics of user environments. This keynote highlights how these evolving frameworks enable a more granular understanding of player behavior, providing a path toward robust player representations that bridge the gap between raw telemetry and the actual player experience.
Bio: Sahar Asadi is Director of AI Labs at King, part of Microsoft Gaming, where she leads industrial research on reinforcement learning, foundation models, representation learning, and player behavior understanding. Her research centers on translating advances in AI into practical applications, with a particular emphasis on systems that improve player experience and support game development. She received her Ph.D. in Computer Science from Örebro University and has held research and applied machine learning roles at Spotify, Meltwater, and Clusterone. She has co-authored work published at NeurIPS, AAAI, EMNLP, IDA, AIIDE, and in IEEE Transactions on Games, and has delivered keynotes at the IEEE Conference on Games, GDC, and FDG. She serves as an Associate Editor of IEEE Transactions on Games and as an Area Chair for ECML PKDD.

Title: Zero to One-Shot Personalization with LLMs

Abstract: For decades, the field of personalization has wrestled with a fundamental tension: the value of adaptation is often eclipsed by the cost of data collection. Lengthy onboarding flows, explicit preference elicitation, and bespoke statistical models were once the unavoidable barriers to entry for any adaptive system. Large language models invert this reality. By ingesting unstructured behavioral data directly and adapting within a single conversation, they enable meaningful personalization in contexts where it was previously too slow or too complex to deliver. This shift fundamentally redefines both enterprise and consumer systems. In the workplace, the multimodal exhaust of everyday work—clickstreams, screen recordings, and natural-language feedback—can now be synthesized into personalized, actionable insights, allowing systems to learn an organization’s standards from collective behavior rather than manual configuration. In consumer settings, where users expect adaptation without effort, effective personalization mirrors the expertise of human advisors like stylists or tutors: it emerges through interaction, inferring needs from reactions and resolving ambiguity conversationally. Across both contexts, the central design question remains the same: not whether personalization is technically feasible, but whether we are building interactions worthy of driving it.
Bio: Ranjitha Kumar is an Associate Professor in the Siebel School of Computing and Data Science and (by courtesy) the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She leads the Data Driven Design Group, where her research focuses on the intersection of machine learning and effective user experiences. She also serves as the Director of the Innovation Leadership and Engineering Entrepreneurship (ILEE) Program in the Grainger College of Engineering.

In addition to her academic leadership, she is the Chief Scientist at UserTesting. Since 2019, she has guided the company’s AI-product strategy, working to bridge quantitative and qualitative experience testing. She received her B.S. and Ph.D. from the Department of Computer Science at Stanford University. Based on her dissertation work, she co-founded Apropose, a data-driven design startup backed by top Silicon Valley venture capital firms.
