User Modeling and User-Adapted Interaction (UMUAI) provides an interdisciplinary forum for the dissemination of new research results on interactive computer systems that can be adapted or adapt themselves to their current users, and on the role of user models in the adaptation process.

UMUAI has been published since 1991 by Kluwer Academic Publishers (now merged with Springer-Verlag).

UMUAI homepage with a description of the journal's scope and instructions for authors.

Springer UMUAI page with online access to the papers.

Latest Results for User Modeling and User-Adapted Interaction

04 December 2021

The latest content available from Springer
  • James Chen Annual Award for Best Journal Article
  • Dynamic aspect-based rating system and visualization


    With an increasing number of product reviews available online, it has become impractical for potential customers to read all the available reviews in order to make an informed purchase decision. Product ratings that summarize product reviews quickly and easily have become an alternative for customers. However, since many product ratings display only the overall rating, customers may still find it challenging to make an informed decision, because the balance between positive and negative reviews is not visible. In addition, existing product ratings are static in nature: they do not cater to customers’ different needs, since customers often prioritize different aspects or features of the product. Accordingly, this paper proposes a dynamic aspect-based rating system, accompanied by an aspect-based rating visualization, to address the aforementioned problems. The rating system also takes into account the reputations of the users who wrote the product reviews, giving a more holistic view of the reviewers. Moreover, our user study shows that our proposed rating visualization can be a competitive alternative for representing a product rating, since it is informative and easily customized thanks to its ability to display rating scores based on users’ preferred aspects. Finally, the proposed visualization enables customers to make more informed decisions, since it displays a balance of both positive and negative reviews.
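The scheme described in the abstract can be sketched as a reputation-weighted aggregation restricted to a customer's preferred aspects. This is a minimal illustration, not the paper's implementation; the data structures, field names, and reputation scale below are all hypothetical.

```python
# Hedged sketch: per-aspect ratings are averaged with reviewer-reputation
# weights, then combined over only the aspects a customer cares about.
# All names and the 1-5 rating / 0-1 reputation scales are assumptions.

def aspect_score(reviews, aspect):
    """Reputation-weighted mean rating for one aspect (1-5 scale)."""
    num = sum(r["reputation"] * r["aspects"][aspect]
              for r in reviews if aspect in r["aspects"])
    den = sum(r["reputation"] for r in reviews if aspect in r["aspects"])
    return num / den if den else None

def dynamic_rating(reviews, preferred_aspects):
    """Overall rating computed only from the customer's preferred aspects."""
    scores = [aspect_score(reviews, a) for a in preferred_aspects]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else None

reviews = [
    {"reputation": 0.9, "aspects": {"battery": 4, "camera": 5}},
    {"reputation": 0.4, "aspects": {"battery": 2}},
]
print(dynamic_rating(reviews, ["battery", "camera"]))
```

Because the aggregation is recomputed per customer, the displayed rating changes with the selected aspects, which is what makes the rating "dynamic" in the abstract's sense.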

  • Transferring recommendations through privacy user models across domains


    Although privacy settings are important not only for data privacy but also for preventing attacks such as social engineering that rely on leaked private data, most users do not care about them. Research has tried to help users set their privacy settings by using settings the user has already adjusted, or individual factors such as personality, to predict the remaining settings. But in some cases, neither is available. However, the user might already have configured privacy settings in another domain; for example, she has adjusted the privacy settings on her smartphone but not on her social network account. In this article, we investigate, using four example domains (social network posts, location sharing, smartphone app permission settings, and data of an intelligent retail store), whether and how precisely the privacy settings of one domain can be predicted from another. We performed an exploratory study to examine which privacy settings of the aforementioned domains could be useful, and validated our findings in a validation study. Our results indicate that such an approach works, with a prediction precision about 15%–20% better than random guessing and than a prediction without input coefficients. We identified clusters of domains that allow model transfer between their members, and we discuss which kind of privacy settings (general or context-based) leads to better prediction accuracy. Based on the results, we would like to conduct user studies to find out whether the prediction precision is perceived by users as a significant improvement over a “one-size-fits-all” solution in which every user is given the same privacy settings.
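As a rough illustration of the cross-domain idea (not the authors' actual model), one could predict a user's settings in a target domain from the known user whose source-domain settings are most similar. Everything below, including the binary allow/deny encoding and the nearest-neighbour choice, is a hypothetical stand-in:

```python
# Illustrative sketch: 1-nearest-neighbour transfer of privacy settings
# across domains. Settings are encoded as binary allow(1)/deny(0) vectors;
# this encoding and the distance measure are assumptions for the sketch.

def hamming(a, b):
    """Number of positions where two setting vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def predict_target(new_source, known_users):
    """known_users: list of (source_settings, target_settings) pairs.

    Returns the target-domain settings of the user whose source-domain
    settings are closest to new_source.
    """
    nearest = min(known_users, key=lambda u: hamming(u[0], new_source))
    return nearest[1]

known = [
    ([1, 1, 0], [1, 0]),  # permissive on social network -> permissive app perms
    ([0, 0, 0], [0, 0]),  # restrictive in both domains
]
print(predict_target([1, 1, 1], known))
```

A real system would presumably use the clusters of transfer-compatible domains identified in the article rather than raw nearest neighbours, but the sketch shows why source-domain settings can carry signal about target-domain settings at all.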

  • Online convex combination of ranking models


    As a task of high importance for recommender systems, we consider the problem of learning a convex combination of ranking algorithms by online machine learning. First, we propose a stochastic optimization algorithm that uses finite differences. Our new algorithm achieves close to optimal empirical performance for two base rankers while scaling well with an increased number of models. In our experiments with five real-world recommendation data sets, we show that the combination offers significant improvement over previously known stochastic optimization techniques. The proposed algorithm is the first effective stochastic optimization method for combining ranked recommendation lists by online machine learning. Second, we propose an exponentially weighted algorithm based on a grid over the space of combination weights. We show that the algorithm has a near-optimal worst-case performance bound. The bound provides the first theoretical guarantee for non-convex bandits using a limited number of evaluations under very general conditions.
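The second idea can be sketched as an exponentially weighted bandit (here, Exp3-style) over a discrete grid of combination weights for two base rankers. This is a simplified sketch, not the paper's algorithm or bound; the reward function is a synthetic placeholder for evaluating the combined ranked list, and all constants are illustrative:

```python
import math
import random

random.seed(0)
grid = [i / 10 for i in range(11)]   # candidate weights w for ranker A; ranker B gets 1 - w
K = len(grid)
weights = [1.0] * K                  # exponential weights, one per grid point
eta, gamma = 0.05, 0.1               # learning rate and exploration rate (assumed values)

def reward(w):
    # Synthetic stand-in for a ranking metric observed under bandit
    # feedback; here the (unknown) best combination weight is 0.7.
    return max(0.0, 1.0 - abs(w - 0.7))

for _ in range(4000):
    total = sum(weights)
    probs = [(1 - gamma) * wt / total + gamma / K for wt in weights]
    arm = random.choices(range(K), weights=probs)[0]
    r = reward(grid[arm])
    # importance-weighted multiplicative update for the pulled arm only
    weights[arm] *= math.exp(eta * r / (probs[arm] * K))

best = grid[max(range(K), key=lambda i: weights[i])]
print("best combination weight:", best)
```

The grid discretization is what turns the continuous (and, per the abstract, non-convex) weight-selection problem into a finite-armed bandit for which exponential-weighting guarantees apply.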

  • Improving accountability in recommender systems research through reproducibility


    Reproducibility is a key requirement for scientific progress. It allows the work of others to be reproduced and, as a consequence, their reported claims and results to be fully trusted. In this work, we argue that, by facilitating the reproducibility of recommender systems experimentation, we indirectly address the issues of accountability and transparency in recommender systems research from the perspectives of practitioners, designers, and engineers aiming to assess the capabilities of published research works. These issues have become increasingly prevalent in recent literature. Reasons for this include societal movements around intelligent systems and artificial intelligence striving toward the fair and objective use of human behavioral data (as in Machine Learning, Information Retrieval, or Human–Computer Interaction). Society has grown to expect explanations and transparency standards regarding the underlying algorithms that make automated decisions for and around us. This work surveys existing definitions of these concepts and proposes a coherent terminology for recommender systems research, with the goal of connecting reproducibility to accountability. We achieve this by introducing several guidelines and steps that lead to reproducible and, hence, accountable experimental workflows and research. We additionally analyze several instantiations of recommender system implementations available in the literature and discuss the extent to which they fit into the introduced framework. With this work, we aim to shed light on this important problem and to facilitate progress in the field by increasing the accountability of research.