Workshops
LLM4Good: The 2nd Workshop on Sustainable and Trustworthy Large Language Models for Personalization
Organizers: Ahmadou Wagne, Thomas Elmar Kolb, Ashmi Banerjee, Julia Neidhardt and Yashar Deldjoo
Description: Large Language Models (LLMs) are transforming personalized services by enabling adaptive, context-aware recommendations and interactions. However, deploying these models at scale raises significant concerns about environmental impact, fairness, privacy, and trustworthiness, including high energy consumption, biased outputs, privacy breaches, and hallucinations. Following its first edition at UMAP '25, this half-day workshop addresses these challenges by fostering dialogue on sustainable and ethical approaches to LLM-based personalization. Participants will explore energy-efficient techniques, bias mitigation, privacy-preserving methods, and responsible deployment strategies. The workshop aligns with the Sustainable Development Goals and Digital Humanism principles. It aims to guide the development of trustworthy, human-centric LLM systems that positively impact education, healthcare, and other domains.
ExUM 2026: The 8th Workshop on Explainable User Modeling and Personalised Systems
Organizers: Cataldo Musto, Amra Delić, Marco Polignano, Amon Rapp, Giovanni Semeraro and Jürgen Ziegler
Description: Adaptive and personalized systems increasingly mediate everyday digital experiences, and recent advances in LLMs, NLP, and Generative AI have amplified their reach, from intelligent user interfaces and conversational agents to AR/immersive interactions and autonomous assistants. These methods now support applications in health and well-being, behavior change and persuasion, e-learning and educational games, and group modeling for collaboration and team formation, all of which require increasingly rich and dynamic user models.
At the same time, modern pipelines based on data mining, knowledge graphs/linked data, semantic representations, and affective computing raise urgent questions about transparency, privacy, fairness, accountability, and user understanding, reinforced by regulatory expectations such as the GDPR right to explanation. Yet research often optimizes personalization performance without comparable attention to interpretability and human comprehension. This workshop provides a forum for theoretical, methodological, and empirical work that bridges effectiveness and explainability, with emphasis on robust human-centered evaluation and reproducible practices, including benchmarks, datasets, and shared challenges that advance trustworthy personalization in an era of increasingly autonomous AI.
SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE) 2026
Organizers: Fabio Gasparetti, Cristina Gena, Styliani Kleanthous and Giuseppe Sansonetti
Description: The SOCIALIZE workshop aims to bring together all those interested in developing interactive and personalized techniques to foster the social and cultural integration of a broad range of users. More specifically, we intend to attract research that considers the interaction peculiarities typical of different realities, focusing on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, and autistic and disabled people). We are also interested in human-robot interaction methods and models for developing social robots, that is, autonomous robots that interact with people by exhibiting the socially affective behaviors, abilities, and norms associated with their collaborative role.
The 17th International Workshop on Personalized Access to Cultural Heritage (PATCH 2026)
Organizers: Liliana Ardissono, Tsvi Kuflik, Noemi Mauro and Alan Wecker
Description: Following the successful series of PATCH workshops, PATCH 2026 will again be the meeting point between state-of-the-art cultural heritage research and personalization, using any technology while focusing on ubiquitous and adaptive scenarios, to enhance the personal experience in natural and cultural heritage sites. The workshop aims to bring together researchers and practitioners interested in exploring the potential of ICT technology (onsite and online) to enhance the visit experience. The expected outcome is the sharing and discussion of novel ideas and the creation of a multidisciplinary research agenda that will inform future research directions and hopefully forge new research collaborations.
GMAP 2026: 5th Workshop on Group Modeling, Adaptation and Personalization
Organizers: Francesco Barile, Amra Delić, Ladislav Peska and Cedric Waterschoot
Description: While most existing HCI and decision-support systems are designed to support single users, there are scenarios where these systems should consider the needs of groups. In these cases, specific challenges have to be addressed. Collective factors – such as interpersonal relationships, group mood, and emotional contagion – play a crucial role in group dynamics. Still, they are often ill-defined and absent from systems’ models. Furthermore, producing fair, privacy-protecting, and explainable recommendations is a notorious challenge for group recommender systems. The potential of large language models to enhance explainability and tackle these challenges remains under-explored. Lastly, defining a comprehensive evaluation methodology that covers the particularities of group recommender systems is a long-standing issue in the field. The 5th GMAP workshop aims to bring together a community of researchers from multiple disciplines, including Psychology, Computer Science, and Organizational Behavior. In this workshop, researchers have the opportunity to share research and ideas, fostering a vibrant and inclusive community and creating opportunities for networking and collaboration. Such a community will contribute to advancing our understanding of group modeling, adaptation, and personalization, identifying key challenges and opportunities, and developing a shared research agenda to guide future work in the field.
The 14th International Workshop on News Recommendation and Analytics (INRA 2026)
Organizers: Célina Treuillier, Andreea Iana, Vandana Yadav, Benjamin Kille, Andreas Lommatzsch and Özlem Özgöbek
Description: News and news recommender systems play a central role in shaping how people understand and interpret the world. Recent advances in generative AI have opened up new possibilities for content creation, personalization, multimodal analysis and interaction. While these technologies enhance automation and efficiency in news production and recommendation, they also introduce risks such as inconsistencies, misinformation, opinion polarization, and declining user trust.
At the same time, the growing technical complexity of these methods, together with diverging regulatory requirements for fairness, transparency, and accountability in AI systems, poses significant challenges for news recommendation. The 14th International Workshop on News Recommendation and Analytics (INRA) serves as a dedicated venue for researchers and practitioners from various disciplines to share insights and explore how human-centered approaches can address these challenges by providing fair, transparent, sustainable, and user-respecting news recommendations.
ParaAdapt: Parametric User Modeling & Adaptation for Navigating Complex Interactive Domains
Organizers: Swen Gaudl, Mark J. Nelson and Günter Wallner
Description: This workshop explores parametric interaction design as a practical approach for understanding users, building interpretable user models, and enabling adaptation/personalization across interactive artefacts and interfaces (educational tools, creativity support systems, data/AI interfaces, XR, interactive installations, playful systems, and games). A core focus is on how users navigate complex domains: layered information, multi-step tasks, conceptual spaces, and large design/decision spaces. We connect user modelling and adaptation with interaction design by treating adaptation as explicit, inspectable parametric reconfiguration (tuning guidance, constraints, information density, mappings, feedback, pacing, and content variability) rather than opaque personalization. Inspired by hands-on design activities (interaction design studios, participatory design, iterative prototyping, in-the-wild evaluation; and, as one motivating example, rapid game jams using the casual game creator ParaVida), we leverage parametric traces (parameter trajectories, interaction logs, and design rationale) to model navigation strategies, expertise, uncertainty, and help-seeking. The workshop also welcomes work where parametric controls govern procedural content generation (PCG) or other generative systems as important application domains. We will produce a shared taxonomy of “complexity controls”, adaptation pattern cards, and a community roadmap/whitepaper and repository. Human-centered AI perspectives (transparency, agency, accessibility) will be integrated as design constraints to guide the development of the produced material.
Trustworthy and Adaptive LLMs for Mental and Physical Wellbeing in Recommendations
Organizers: Zhu Sun, Yi Ding, Yin Leng Theng, Wei Quin Yow, Roy Ka-Wei Lee and Xun Jiang
Description: Large Language Models (LLMs) are rapidly transforming recommender systems (RSs) and user modeling by enabling richer representations of users, context, and intent. In wellbeing-oriented applications, such as activity, diet, stress, and mental health support, the use of LLM-based RSs introduces both new opportunities and critical challenges related to trustworthiness, adaptivity, explainability, privacy, and human-centered evaluation. This workshop aims to bring together researchers and industry practitioners from user modeling, personalization, RSs and healthcare to examine how LLM-based models can be responsibly designed, evaluated, and deployed for mental and physical wellbeing, fostering AI for social good. The workshop focuses on adaptive LLM-based user modeling, trustworthy recommendation mechanisms, LLM-powered user simulation, and evaluation methodologies that capture longitudinal and affective user outcomes. Strongly aligned with the UMAP community, the workshop emphasizes adaptive and personalized systems that place human values at the center. Through paper presentations, invited talks, and interactive discussions, the workshop will surface open challenges, emerging best practices, and future research directions for LLM-driven wellbeing recommendation.
8th UMAP Workshop on Fairness in User Modeling, Adaptation, and Personalization (FairUMAP 2026)
Organizers: Bamshad Mobasher, Styliani Kleanthous, Robin Burke, Avital Shulner-Tal and Tsvi Kuflik
Description: Machine learning, recommender systems, and user modeling are key enabling technologies used in personalized intelligent systems. However, there has been growing recognition that these underlying technologies raise novel ethical, policy, and legal challenges, particularly concerning trust and trustworthiness from both user and system perspectives. From a user perspective, trust depends on perceptions of how the system behaves and impacts them, while system trustworthiness relates to the properties and design of the system itself. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare and ethical considerations can be understood as core aspects that shape both user trust and system trustworthiness, yet they are not always captured by the metrics typically used to optimize data-driven personalized models. Bias, fairness, transparency, and trustworthiness in machine learning are topics of considerable recent research interest, but more work is needed on algorithmic and modeling approaches in which user modeling and personalization are central. The 8th edition of this workshop will bring together experts from academia and industry to explore mechanisms and modeling approaches that help mitigate bias, achieve fairness, and foster trustworthy personalized systems. The workshop aligns with the themes of the UMAP 2026 conference and will enable in-depth discussion of fairness- and trust-related problems in an interactive setting.
First International Workshop on User Modeling, Personalization, and Adaptive Systems for Sustainability and Social Good (UMAP4Good 2026)
Organizers: Allegra De Filippo, Angelo Geninatti Cossatin, Elisabeth Lex, Noemi Mauro, Giacomo Medda and Giuseppe Spillo
Description: In an era where personalization technologies increasingly shape human decision-making, integrating sustainability and social good into user-adaptive systems has become an essential challenge. Building on recent advances in User Modeling, Personalization, and Adaptive Systems, this workshop aims to consolidate a research agenda focused on how personalization can support sustainable behaviors, ethical decision-making, inclusion, and positive societal impact. Personalized and adaptive systems influence users’ choices across many contexts—health, mobility, education, media consumption, and more. Their role in shaping long-term behavioral change positions them as key technologies for supporting the UN Sustainable Development Goals and broader sustainability initiatives.
This workshop provides an interdisciplinary venue for researchers, practitioners, and policymakers to explore theoretical, methodological, and practical advances at the intersection of sustainability, personalization, and adaptive systems. Through presentations, discussions, and interactive sessions, the workshop aims to stimulate knowledge exchange and collaboration, and to define emerging challenges for developing sustainable, inclusive, and socially responsible personalized systems.