Keynote Speakers
Title: One-Size Explanations Still Do Not Fit All
Title: Decoding Gameplay: Foundation Models and the Future of Player Context
Abstract: The success of Foundation Models (FMs) in natural language often masks the significant hurdles of applying them to high-frequency, non-textual human behavior. This talk explores the transition to User Foundation Models, using the multi-dimensional environment of gaming as a lens for understanding human-system interaction at scale. The session addresses the friction between building universal representations and maintaining the precision required for task-specific predictions. By examining the challenges of non-text modalities and the limitations of current time-series approaches, the talk outlines the practical realities of moving beyond language to model complex, dynamic human activities. Looking toward future directions, it asks whether the field is heading toward new paradigms for capturing the underlying dynamics of user environments. The keynote highlights how these evolving frameworks enable a more granular understanding of player behavior, offering a path toward robust player representations that bridge the gap between raw telemetry and the actual player experience.
Title: Zero to One-Shot Personalization with LLMs
Abstract: For decades, the field of personalization has wrestled with a fundamental tension: the value of adaptation is often eclipsed by the cost of data collection. Lengthy onboarding flows, explicit preference elicitation, and bespoke statistical models were once the unavoidable barriers to entry for any adaptive system. Large language models invert this reality. By ingesting unstructured behavioral data directly and adapting within a single conversation, they enable meaningful personalization in contexts where it was previously too slow or too complex to deliver. This shift fundamentally redefines both enterprise and consumer systems. In the workplace, the multimodal exhaust of everyday work—clickstreams, screen recordings, and natural-language feedback—can now be synthesized into personalized, actionable insights, allowing systems to learn an organization’s standards from collective behavior rather than manual configuration. In consumer settings, where users expect adaptation without effort, effective personalization mirrors the expertise of human advisors like stylists or tutors: it emerges through interaction, inferring needs from reactions and resolving ambiguity conversationally. Across both contexts, the central design question remains the same: not whether personalization is technically feasible, but whether we are building interactions worthy of driving it.
Bio: Ranjitha Kumar is an Associate Professor in the Siebel School of Computing and Data Science and (by courtesy) the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She leads the Data Driven Design Group, where her research focuses on the intersection of machine learning and effective user experiences. She also serves as the Director of the Innovation Leadership and Engineering Entrepreneurship (ILEE) Program in the Grainger College of Engineering.
In addition to her academic leadership, she is the Chief Scientist at UserTesting. Since 2019, she has guided the company’s AI-product strategy, working to bridge quantitative and qualitative experience testing. She received her B.S. and Ph.D. from the Department of Computer Science at Stanford University. Based on her dissertation work, she co-founded Apropose, a data-driven design startup backed by top Silicon Valley venture capital firms.