SESSION: Session 1: Late-breaking Results and Demos
It is our great pleasure to welcome you to the UMAP 2020 LBR and Demo Track, held in conjunction with the 28th Conference on User Modeling, Adaptation and Personalization, which took place online from July 12 to 18, 2020. This track encompasses two categories: (i) Demos, which showcase research prototypes and commercially available products of UMAP-based systems, and (ii) Late-breaking Results (LBR), which contain original and unpublished accounts of innovative research ideas, preliminary results, industry showcases, and system prototypes, addressing both the theory and practice of UMAP. The submissions spanned a wide scope of topics, ranging from novel techniques for user and group modeling to adaptation and personalization implementations across different application scenarios. We received 25 LBR and 4 Demo submissions. Each submission was carefully reviewed by at least 3 members of the Demo and LBR program committee, which consisted of 55 members. Of these 29 submissions, 17 LBRs and 2 Demos were deemed of good quality by the reviewers and were consequently accepted (a 65% overall acceptance rate). They were presented in the UMAP poster and demo sessions, which collectively showcased the wide spectrum of novel ideas and latest results in user modeling, adaptation, and personalization. We would like to thank the members of the Program Committee for their reviews and discussions, and we express our gratitude for their help in selecting this set of high-quality contributions.
The number of people who come into contact with drugs is continuously increasing. Excessive drug intake becomes problematic when it turns into disordered behaviors, such as addiction. To treat these behaviors, treatment plans often adhere to a one-size-fits-all approach with fixed, standardized steps. However, it has been acknowledged that personalized treatment programs are necessary for effective treatment of disordered behaviors. Personality has been argued to be a factor that plays an important role in setting up effective treatment plans. In this work we explored the predictability of people’s personality traits from their drug consumption profiles. Based on self-reported consumption frequencies of “abusable psychoactive drugs” among 1,878 respondents, we found that drug consumption profiles can be used to predict people’s personality traits. Such predictions can be used to circumvent intrusive questionnaires and to implicitly create personalized treatment programs.
Finding the right university to study at is still a challenge for many people due to the large number of universities worldwide. Although a number of global university rankings exist, they provide non-personalized rankings as a one-size-fits-all solution. This becomes an issue since different people may have different preferences and considerations in mind when choosing a university.
This paper addresses this problem and presents a Recommender System that generates a personalized ranking list based on users’ particular preferences. The system is capable of eliciting users’ preferences, provided as ratings for universities, building predictive models on the preference data, and generating a personalized university ranking list that is tailored to the particular preferences and needs of each user.
We performed two sets of experiments. First, we conducted an offline experiment using a dataset of user preferences collected by an early version of our system. This allowed us to cross-validate and compare different recommender algorithms and choose the most accurate one for the problem at hand. We integrated the chosen algorithm into the final implementation of our system. As a follow-up, we performed a user study to analyze whether the final version of our system is usable from the users’ perspective. The results showed that the system scored well above the benchmark, and users assessed it as “good” in terms of usability.
A User Training Error Based Correction Approach Combined with the Synthetic Coordinate Recommender System
We propose a Synthetic Coordinate Recommendation system using a user Training Error based Correction approach (SCoR-UTEC). The SCoR system assigns synthetic Euclidean coordinates to users and items so that, when the system converges, the distance between a user and an item provides an accurate prediction of the user’s preference for that item. In this paper, we introduce a stage called UTEC, run after the SCoR execution, which corrects the SCoR recommendations by taking into account the training-set error between users and items and their proximity in the synthetic Euclidean space of SCoR. UTEC is also applicable to any model-based recommender system with positive training error, like SCoR. The experimental results demonstrate the efficiency and high performance of the proposed second stage on real-world datasets.
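To make the synthetic-coordinate idea concrete, the following is a minimal, hypothetical sketch of how a distance in the learned Euclidean space could be mapped to a rating prediction. The function names and the linear distance-to-rating mapping are illustrative assumptions, not the actual SCoR calibration.

```python
# Illustrative sketch (not the authors' code): in a synthetic-coordinate
# recommender, each user and item is a point in a Euclidean space, and the
# predicted rating is a decreasing function of their distance.
import math

def predict_rating(user_coord, item_coord, r_max=5.0, r_min=1.0, scale=1.0):
    """Map the user-item Euclidean distance to a rating: closer means higher."""
    dist = math.dist(user_coord, item_coord)
    # A simple monotone decreasing mapping; SCoR uses its own calibration.
    rating = r_max - scale * dist
    return max(r_min, min(r_max, rating))

# A user sitting exactly on an item gets the maximum rating.
print(predict_rating((0.0, 0.0), (0.0, 0.0)))  # 5.0
# A distant item (distance 5 here) is clipped to the minimum rating.
print(predict_rating((0.0, 0.0), (3.0, 4.0)))  # 1.0
```

The UTEC stage described above would then adjust such predictions using the residual training error of nearby users and items.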
The interaction between humans and machines is expected to become more and more intuitive and natural. One prerequisite for achieving this is the ability of a spoken dialogue system to react flexibly to the individual requirements of a user, e.g., by means of adaptive voice output. In this context, a user’s personality, which is directly reflected in his or her language style, represents a valuable framework for attributing linguistic differences in spoken language and designing user-adaptive voice output. The need for such efficient, personalized communication becomes particularly safety-relevant in situations where speech-based interaction must be performed in parallel with a prioritized primary task, e.g., when driving a car. For this purpose, we performed data collection in a driving simulator and investigated user speech during the primary task of driving, with a focus on the syntactic level. Our results revealed five syntactic complexity factors to be considered in the generation of voice output, which indicated significant differences in the spoken language of six different personality type clusters. This analysis serves as a basis for future work towards user- and situation-adaptive voice output in dual-task environments.
Machine-generated personalization is increasingly used in online systems. Personalization is intended to provide users with relevant content, products, and solutions that address their respective needs and preferences. However, users are becoming increasingly vulnerable to online manipulation due to algorithmic advancements and a lack of transparency. Such manipulation decreases users’ levels of trust, autonomy, and satisfaction concerning the systems with which they interact. Increasing transparency is an important goal for personalization-based systems, and system designers benefit from guidance in implementing transparency in their systems.
In this work we combine insights from technology ethics and computer science to generate a list of transparency best practices for machine generated personalization. We further develop a checklist to be used by designers to evaluate and increase the transparency of their algorithmic systems. Adopting a designer perspective, we apply the checklist to prominent online services and discuss its advantages and shortcomings. We encourage researchers to adopt the checklist and work towards a consensus-based tool for measuring transparency in the personalization community.
In the social media domain, user-to-user recommendation is an important factor in suggesting new content and strengthening users’ social circles. In this paper we investigate how to improve user-to-user recommendation by exploiting a user similarity metric computed by analysing the photos users share on their Instagram profiles. We consider in particular users with an established credibility and audience, the so-called “influencers”. The main idea is that if two influencers publish photos containing similar content, it is more likely that they share the same interests and are similar. Moreover, users who follow other users sharing related content are also more similar. Similarity between influencers’ photo collections is estimated through neural network embeddings, using a network trained to classify photo collections into categories of interest. A hybrid recommendation approach, which combines collaborative filtering with this compact representation of the visual content of photo collections, is proposed. Experiments on a large dataset of ~4.8M Instagram users show how our visual approach enhances the performance of a user-to-user recommender with respect to a baseline recommendation algorithm based on collaborative filtering.
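As a rough illustration of the hybrid idea (not the paper’s actual model), the sketch below represents each photo collection by the mean of its per-photo embedding vectors, compares collections by cosine similarity, and linearly blends that visual similarity with a collaborative-filtering score. All vectors, weights, and function names are invented for the example.

```python
# Hedged sketch: blend visual similarity between photo-collection
# embeddings with a collaborative-filtering score.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def collection_embedding(photo_embeddings):
    """Represent a collection by the mean of its per-photo embeddings."""
    dim = len(photo_embeddings[0])
    return [sum(p[i] for p in photo_embeddings) / len(photo_embeddings)
            for i in range(dim)]

def hybrid_score(cf_score, visual_sim, alpha=0.5):
    """One possible blend: a linear combination of the two signals."""
    return alpha * cf_score + (1 - alpha) * visual_sim

a = collection_embedding([[1.0, 0.0], [0.8, 0.2]])   # influencer A's photos
b = collection_embedding([[0.9, 0.1], [1.0, 0.0]])   # influencer B's photos
print(round(hybrid_score(0.4, cosine(a, b)), 3))     # blended A-B score
```

In the actual system, the embeddings come from a network trained on photo-collection categories rather than toy two-dimensional vectors.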
Previous work has shown that gaze behavior can vary not only as a function of stimuli and task but also as a function of the observer. Stable inter-individual differences have been demonstrated, as well as changes in gaze behavior due to an observer’s internal state. How such intra-individual differences interact with inter-individual variations in gaze behavior has not been studied explicitly before. Here, we tackle this question by analyzing fixation statistics and scan path representations in a visual comparison task that induces different observer states. Results show that reliable inter-individual differences exist in fixation statistics and in scan path representations that model a longer temporal horizon. Changes in observer state affected the data, but a substantial amount of variance between observers remained stable. We discuss the results in light of making personalized gaze-based applications more robust against changes in context.
The Impact of Adaptation Based on Students’ Dyslexia Type: An Empirical Evaluation of Students’ Satisfaction
Research on dyslexia-type adaptation has received little attention from researchers. What work there is, is often marked by a lack of well-designed and rigorous experimental evaluation of its effectiveness in general, and specifically of the satisfaction of students with dyslexia with their learning. A high level of student satisfaction is a significant indicator of a system’s effectiveness, as it improves students’ experience and motivation and, therefore, enhances their learning process. This paper aims to investigate how adaptation based on a student’s dyslexia type affects their satisfaction. An adaptive e-learning system that adapts learning material based on dyslexia type was implemented. A controlled experiment was conducted with 40 students with dyslexia to evaluate their satisfaction level with the e-learning system. The results show that students were more engaged and satisfied with their learning experience when the learning content matched their dyslexia type. This indicates that adapting learning material according to dyslexia type can improve the satisfaction of these students and increase their motivation and experience.
Privacy education is becoming increasingly important these days, especially for young people. While several e-learning platforms for privacy awareness training have been implemented, they are typically based on traditional learning techniques. More specifically, they do not allow students to cooperate and share knowledge in order to achieve mutual benefits and improve learning outcomes. In this paper, we propose a collaborative e-learning platform for privacy education, which can provide a stable personalized partner selection mechanism using game theory. The proposed mechanism guarantees a stable student-student matching according to students’ preferences (behavior and/or knowledge). Experimental results show the effectiveness of the proposed model in terms of achieving students’ satisfaction compared to other existing partner selection models. The results also suggest that the proposed approach allows us to achieve better learning outcomes in privacy education.
In the future, spoken dialogue systems will have to deal with more complex user utterances and should react in an intuitive, comprehensible way by adapting to the user, the situation, and the context. In rapidly changing situations, like talking to a highly automated car, it is highly relevant to react adequately to quick, urgent interjections, whether within one utterance or as interruptions of ongoing actions or dialogues. A first step is the detection of urgency in user utterances. We therefore developed a user study based on gamification that simulates such short-term urgent situations. With this study, we collected data for a first analysis of features from the audio signal that are promising for detecting urgent utterances. In the game “What is it?”, participants had to find a symbol consisting of three characteristics from a set via speech. Their search was regularly interrupted by a time-limited urgent task. The data obtained show that features from the audio signal alone can be used to distinguish between urgent and non-urgent utterances. Further analysis reveals that certain features of the audio signal represent different phases of the data set better or worse. We distinguish, among other things, between the Transition and Decline phases, which represent the shift from non-urgent to urgent speech and vice versa. These shifts are recognizable and can occur in rapid succession. We identified several classification methods that successfully detect urgent speech in each phase.
Getting to Know Your Neighbors (KYN). Explaining Item Similarity in Nearest Neighbors Collaborative Filtering Recommendations
The popular neighborhood-based Collaborative Filtering recommendation techniques are mostly characterized as black-box systems in which resulting outputs are not easy to interpret. In this work, our goal is to provide human-interpretable explanations of item-based collaborative filtering recommendations and to understand the underlying data distribution. We propose the Know Your Neighbors (KYN) algorithm – a novel model-agnostic approach to explaining similarity-based CF recommendations on both local and global levels. In this approach, a post-hoc explainer model is applied to reveal the most important descriptive features of items that explain the neighborhood function for two popular collaborative algorithms. Our algorithm is evaluated on two public recommendation datasets. The descriptions generated for both the datasets are consistent with our intuitive understanding of user behaviors, but they also reveal that each representation may be susceptible to different types of biases in the dataset.
We present an interactive user interface that allows digital marketing professionals to have real-time access to insights from a back-end AI that predicts the potential click-through rates of composed content based on similar past campaigns. We wanted to investigate the extent to which digital marketing professionals would find our system usable and useful, and whether the advice our system generated would lead to content with higher click-through rates than content developed without the system’s advice. Our framework decomposes aspects of prior campaigns into features including image quality, memorability, and placement, and text readability, formality, and sentiment. We show that our algorithm has high predictive value on a historical test set (AUC 0.80); that digital marketing professionals give the system an overall high satisfaction rating; and that, using the advice of the AI agent, we can generate content that creates up to 22% click-through-rate lift on 700 A/B preference tasks given to master workers on AMT.
The rise in international migration over the past decades has brought more attention to this crucial aspect of human life. According to reports by the United Nations, more than 243 million people live in a country that is not their place of birth. People decide to immigrate for a range of reasons and choose a country of destination in the hope of beginning a new life. However, such a risky decision may not necessarily lead to an improvement in life; in many cases it could result in the complete dissatisfaction of the emigrating person and, in extreme cases, cause human catastrophe.
Recommender Systems (RSs) are tools that could mitigate this problem by supporting people in their decision-making process. RSs can interact with people who are willing to immigrate and acquire information about their preferences on potential destinations. Accordingly, RSs can build predictive models based on the acquired data and offer suggestions on which destinations could better match the specific preferences and constraints of each person.
This work is an attempt to build an RS that provides personalized recommendations of countries. The system is capable of eliciting users’ preferences in the form of ratings, learning from these preferences, and intelligently generating a personalized ranking list of countries for every target user. We conducted a user study to evaluate the quality of the recommendations, measured in terms of accuracy, diversity, novelty, satisfaction, and the capability to understand the particular preferences of different users. The results were promising and indicated the potential of generating personalized recommendations in this less-explored domain.
In the Social Internet of Things (SIoT), the connected objects operate autonomously to request and provide information and services to end users. Following concepts and aspects from human social networks, the objects interact with each other, and over time develop trustworthy relationships. By mitigating security and privacy concerns, the benefit to end users is more effective and trustworthy services. In this work, we design a recommender system over SIoT. The recommender takes advantage of the social dynamics that drive the behavior and interactions of autonomous objects as they attempt to discover and return the best possible result. The main aim is to facilitate the optimum pairing of objects so as to enable effective recommendations.
In social networks, the phenomena of homophily and influence explain the fact that friends tend to be similar. Social-based recommenders exploit this observation by incorporating the social structure in collaborative filtering techniques. In practice, these recommenders tend to make friends appear more similar compared to non-socially aware techniques. Various proposals have demonstrated the benefit of incorporating social connections. But at what cost? In this work, we show that there exist users that are mistreated in social recommenders. Specifically, their individual preferences are suppressed more compared to other users in their social circle. We seek to identify who they are and develop techniques that protect them, without severely affecting the effectiveness of the recommender.
Children turn to online search tools to complete classroom-related assignments. Unfortunately, these tools are rarely explicitly tailored to meet children’s needs and skills. We seek to design a Search Agent (SA) for children in the context of education; one that can foster natural, effective, and rewarding interactions. To do so, we build on previous explorations of children’s interactions with search tools and study how they imagine an SA that can help with assignments requiring online quests: What are the pragmatic and hedonic qualities of such an SA? Also, do previous experience and familiarity with search technology affect children’s preferences? Our study involved children ages 10-11 and inquired into the factors that define their overall experience with an SA. We outline the steps of our research and the emerging lessons learned so that they can guide the design of an SA.
Recent advances in readability assessment have led to the introduction of multilingual strategies that can predict the reading level of a text regardless of its language. These strategies, however, tend to be limited to merely operating in different languages rather than taking explicit advantage of the multilingual corpora they utilize. In this manuscript, we discuss the results of an in-depth empirical analysis we conducted to assess the language-transfer capabilities of four different strategies for readability assessment with increasing multilingual power. Results show that transfer learning is a valid option for improving the performance of readability assessment, particularly in the case of typologically similar languages and when the availability of training corpora is limited.
We demonstrate a novel personalized, multifaceted entity-relation graph visualization. Since the entities we are linked to are part of our lives (and profiles) and help in understanding who we are and what we are interested in, we enable users to explore the entities they are linked to in a specific context. For that purpose, we adapt the typed entity-relation graph (profile) concept. In the context of an academic conference, we allow scholars to explore a graph of related entities and a word cloud representing the links, providing the user with a comprehensive, compact, and structured overview of the explored scholar. In this demonstration, the users (the participants of UMAP’20, serving as a case study) will be asked to explore the entity-relation graph (profile) of a given participant (who agreed to have their profile presented to others for the purposes of this research), and then, based on this process, to give feedback about each of the visualization elements and the overall experience. Based on the feedback elicited, we will be able to evaluate to what extent this visualization helps in giving a comprehensive overview when exploring any given scholar’s profile.
“Filter bubbles,” a phenomenon in which users become caught in an information space with low diversity, can have various negative effects. Several tools have been created to monitor the users’ actions to make them aware of their own filter bubbles, but these tools have disadvantages (e.g., infringement on privacy). We propose a standalone demo that does not require any personal data. It emulates Facebook, a well-known and popular social network. We demonstrate how each user interaction may affect the selection of subsequent posts, sometimes resulting in the creation of a ‘filter bubble.’ The administrator (researcher) can tailor the demo for any context, changing the topics and points of view used in the demo. Data collection via surveys before and after the demo is facilitated so that the demo can be used for research, in addition to education.
SESSION: Session 2: Adaptation and Personalization in Computer Science Education (APCSE 2020)
We present preliminary ideas and a prototype implementation of a collaborative environment based on gamification, aimed at teaching coding and software life cycle principles such as design, development, and testing to beginners. As a guiding example, we consider a two-player game in which each player can dynamically modify their strategy via a simple rule-based language. In this setting the player console plays the role of the usual coding tools. Moreover, the sprite controlled by each player can be viewed as a sort of reactive module that interacts with the other players within the chosen game. The game starts with a default strategy. During game play, players can then adapt their strategy by updating the rules that govern the behaviour of their sprites. The prototype is built on top of the Python arcade library, extended with a communication middleware based on asyncio and zeromq to run the environment on a set of remote machines.
We modeled student learning status and tasks in abacus-based calculation by applying matrix factorization to student-generated learning data. The matrix consisted of performance scores for student-task pairs. We decomposed the raw matrix into two matrices, yielding distributed representations of each student and each task. Predicting student performance using these decomposed matrices achieved better results than baseline models that use student and task biases. This suggests matrix factorization successfully extracted the interaction of multiple latent features of each task and each student’s learning status in abacus-based calculation.
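A toy sketch of this kind of factorization (illustrative only; the study’s actual data, rank, and optimizer are not specified here) decomposes a small student-by-task score matrix into two low-rank factor matrices with plain SGD, then predicts a held-out score from the learned factors:

```python
# Illustrative sketch: factorize a tiny student-by-task score matrix with
# plain SGD. The data, rank k, and learning rate are invented.
import random

random.seed(0)
# Rows are students, columns are tasks; None marks a held-out score.
scores = [[0.9, 0.8, 0.7],
          [0.6, 0.5, None],
          [0.3, 0.2, 0.1]]
n_students, n_tasks, k = 3, 3, 2          # k latent features
S = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_students)]
T = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_tasks)]

def pred(i, j):
    """Predicted score: dot product of student and task latent vectors."""
    return sum(S[i][f] * T[j][f] for f in range(k))

lr = 0.1
for _ in range(5000):                     # SGD over the observed entries
    for i in range(n_students):
        for j in range(n_tasks):
            if scores[i][j] is None:
                continue
            err = scores[i][j] - pred(i, j)
            for f in range(k):
                S[i][f], T[j][f] = (S[i][f] + lr * err * T[j][f],
                                    T[j][f] + lr * err * S[i][f])

print(round(pred(1, 2), 2))  # estimate for the held-out student-task score
```

The distributed representations mentioned in the abstract correspond to the rows of `S` and `T` after training.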
We present a novel teamwork model based on rogaining that we have successfully applied in several events at our University since 2018. Our model, named Slow Rogaining, exploits gamification principles to engage participants, divided into teams, in a rogaine built on top of compelling activities related to computer science. The term slow (inspired by the slow food philosophy) is used to emphasize the difference from standard outdoor rogaines. Indeed, Slow Rogaining is mainly designed as an indoor navigational activity with a limited time duration (3-4 hours). In the paper, we provide a multi-layer analysis of our model in terms of soft-skill development for participants and mentors, orienteering goals, and computer science disciplinary teaching goals. We also discuss the benefits of introducing technology to support activities developed according to this model, both in practice (in the field) and in theory (in the design phase).
We present our experience as educators of small groups of students interested in cybersecurity and ethical hacking. Since 2018 we have been involved in a national cybersecurity training program whose primary goals are bringing talented young students to this field and lessening the gap between the available workforce and what the market demands. The training model exploits gamification principles, and our students apply their knowledge and skills in online competitions, playing in virtual arenas. These provide lawful environments to experiment with cybersecurity vulnerabilities, attacks, and defenses freely and legally. In this paper we first describe the national program we are involved in, and then we detail the activities undertaken at our university, with special emphasis on this year’s edition, which, due to the COVID-19 restrictions, is currently running entirely online.
Personalized recommendation of learning content is one of the most frequently cited benefits of personalized online learning. It is expected that with personalized content recommendation students will be able to build their own unique and optimal learning paths and to achieve course goals in the most optimal way. However, in many practical cases students search for learning content not to expand their knowledge, but to address problems encountered in the learning process, such as failures to solve a problem. In these cases, students could be better assisted by remedial recommendations focused on content that could help in resolving current problems. This paper presents a transparent and explainable interface for remedial recommendations in an online programming practice system. The interface was implemented to support SQL programming practice and evaluated in the context of a large database course. The paper summarizes the insights obtained from the study and discusses future work on remedial recommendations.
In this paper, we present our experience of using the Team-Based Learning (TBL) methodology and the Sonic Pi language in an introductory programming course. Sonic Pi is a code-based musical creation and performance tool, also widely used for computer science (CS) education, since every musical concept corresponds to a notion of computer programming. This aspect makes it extremely effective in adapting to different learning objectives and environments, and in tuning topics to learners’ interests and attitudes. Specifically, we designed two TBL activities aimed at first-year undergraduate students in CS, the first focusing on anticipating a new advanced CS topic and the second on a learning objective of the Introduction to Programming course.
In this paper we discuss the design principles, educational goals, implementation issues, and practical results of the Codinj laboratory organized at the Genova Science Festival to promote computational thinking to a broad audience via a novel combination of the Pocket Code app and the Scratch 3.0 tool. Inspired by the original Jumanji movie, the entire activity is based on the idea of using coding to help participants become part, conceptually and physically, of a video game. From an educational perspective, the proposed exercises are aimed at introducing, with a clear goal in mind, basic computational thinking concepts such as the notions of state, instructions, executions, events, and concurrency.
In this paper we present a laboratory activity aimed at teaching fundamental aspects of programming Internet of Things systems while exploring edge, cloud, and middleware components. The whole activity is built on top of Node-RED, a visual tool based on the flow programming paradigm. Node-RED provides an abstract view of the underlying communication network and facilitates the integration of different types of endpoints. Publish-subscribe architectures are also very useful to simplify communication and data exchange in the resulting IoT system. The proposed laboratory is structured to support and stimulate teamwork activities. Furthermore, Node-RED supports live coding on both local and remote machines. A cloud component provided by Ubidots Education has been integrated into the development process so that students can gain confidence with basic elements of cloud services.
In this study, we propose a proof of concept of a Virtual Reality system able to provide a high degree of engagement and activation to multiple users. Specifically, we designed a game for a collaborative team-building experience, tested the system during a slow rogaining activity, and assessed the participants’ appreciation level. Our results show that several people can be entertained and maintain high activation, even though only one of them is actually immersed in the VR experience wearing a Head Mounted Display.
In this paper we present the tangible coding activity we proposed as an interactive laboratory at the Festival della Scienza in Genova, Italy, in 2018 and 2019. Our goal was to disseminate basic principles of coding in a fun and accessible way, reaching young children. In the activity, each participant was given a small set of 3D shapes — the language — and very simple rules on how to use them to build a tangible sentence as an ordered sequence. We designed an Artificial Intelligence module that, given an image of the shape sequence, is able to identify and recognize the shapes, associate them with labels, and finally produce a fantasy sentence or a small story. Overall, more than 1,000 participants attended the laboratory, confirming the potential of the activity and highlighting many possible future improvements.
SESSION: Session 3: Adaptive and Personalized Privacy and Security (APPS 2020)
Privacy Dashboards: The Impact of the Type of Personal Data and User Control on Trust and Perceived Risk
The strength and memorability of picture passwords, which correlate with users’ visual behavior, vary not only with their cognitive styles but also with their cultural predispositions. As part of the work investigating this topic, this paper reports a case study investigating the visual behavior of Chinese students (N=36) when establishing graphical passwords. To examine cognitive and visual behavior, we provided our participants with two sets of images: the first set contained images highly related to their daily-life experiences (culture-internal), while the second set presented daily-life experiences in a different sociocultural context (culture-external). Our results indicate that users spent more time exploring culture-internal rather than culture-external photos before making their graphical password selection. The different content of the pictures also affected the percentage of password gestures chosen by participants, with the culture-external image type falling more often on hot-spot segments. Our study supports previous findings that promote an individualized approach towards users’ sociocultural experiences in the design of personalized graphical password schemes.
Although user profiles are indicative of the user’s interests, they can be too incomplete to reflect all of them, and in many cases a group of personalized user profiles is needed to re-rank the results returned by search engines. One disadvantage of personalization based on a user profile is that the profile is built by considering only the documents the user has clicked, and the set of clicked documents may be sparse for some users. Data sparsity can be resolved by backing off to a group of users whose behavior is similar to the user’s. In this paper, we present a group-based personalization model using topical user profiles and compare the results of the proposed ranking methods based on group and user profiles. To cluster users into groups we use the K-means clustering algorithm, with the similarity between users measured by the symmetric Kullback-Leibler divergence between their latent topic distributions. Using the proposed group-based personalization model, we can improve the ranking results using group-based profiles and solve the cold-start problem for users without history. Regarding privacy concerns, group profiles are also more secure than user profiles, because both the computation and the storage of user information are done at the level of a group of users. The results reveal that group-based personalization using topical user profiles improves Mean Reciprocal Rank and Normalized Discounted Cumulative Gain by 7% and 6%, respectively, across short-term, long-term, and session profiles, with the short-term user profile proving more effective than the other profiles.
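As a concrete illustration of the similarity measure this abstract mentions, the sketch below computes a symmetric Kullback-Leibler divergence between two users’ latent topic distributions. The smoothing constant and the choice of summing (rather than averaging) the two directed divergences are our assumptions, not details taken from the paper.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two topic distributions.

    Smooths with `eps` to avoid log(0), renormalizes, and sums the
    two directed divergences KL(p||q) + KL(q||p).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Pairwise distances computed this way can feed a K-means-style grouping of users, with each group profile aggregating its members’ clicked documents.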
Patient-centric adaptation of audiological preferences across different contexts is a challenging task, as traditional clinical measurements of audibility reflect neither the cognitive perception of speech nor the binaural loudness of sounds in different contexts. Smartphone-based machine learning personalization systems have the potential to address this issue in real-world listening scenarios; however, the necessary training datasets are not currently available. As hearing healthcare medical data is of a highly private nature, we propose a framework combining federated learning (FL) and secret sharing in the context of hearing aids, with the goal of training models locally while preserving each user’s privacy. We demonstrate an application of such a system on a simplified domain defined by the MNIST digit classification task.
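The secret-sharing ingredient can be illustrated with additive secret sharing, in which each client splits its model update into random shares so that no single share reveals anything about the update; only the sum of all shares reconstructs it. This is a real-valued sketch of the general idea only: production protocols (and, presumably, the one in this work) operate over finite fields with additional masking, and none of the names below come from the paper.

```python
import numpy as np

def make_shares(secret, n_parties, rng):
    """Split `secret` into n additive shares: n-1 random arrays plus a
    correction share, so that all shares sum back to the secret."""
    shares = [rng.normal(size=np.shape(secret)) for _ in range(n_parties - 1)]
    shares.append(np.asarray(secret, dtype=float) - sum(shares))
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares (all parties required)."""
    return sum(shares)
```

In a federated setting, a server could sum one share from each client to obtain the aggregate model update without ever seeing any individual client’s update.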
There is a recognised move towards more personalised health, with citizens at the centre of healthcare provision. In particular, there is an emphasis on the right of citizens to decide who should have access to their medical records, when, and why. The EU project SERUMS is developing a tool-chain for the secure access of distributed medical information, preserving the privacy levels imposed by GDPR and national and/or organisational regulations. We propose a user-centred approach to demonstrate how technologies can converge to enable doctors and patients to interact with integrated healthcare records. In addition, it will allow us to evaluate and evolve our tool-chain.
This study presents an optimal differential privacy framework for the learning of distributed deep models. The deep models, consisting of a nested composition of mappings, are learned analytically in a private setting using a variational optimization methodology. An optimal (ε,δ)-differentially private noise-adding mechanism is used, and the effect of the added data noise on utility is alleviated using a rule-based fuzzy system. The private local data is separated from globally shared data through a privacy-wall, and a fuzzy model is used to robustly aggregate the local deep fuzzy models for building the global model.
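For reference, a standard (ε,δ)-noise-adding mechanism, though not the optimal one this study develops, is the classical Gaussian mechanism, sketched below under the usual calibration σ = √(2 ln(1.25/δ))·Δ/ε, which is valid for ε < 1. The paper’s optimal mechanism would use a different, tighter calibration.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-differential privacy using the
    classical Gaussian mechanism (valid for 0 < epsilon < 1).

    `sensitivity` is the L2 sensitivity of the query being released.
    """
    if rng is None:
        rng = np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return np.asarray(value, dtype=float) + rng.normal(0.0, sigma, size=np.shape(value))
```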
Advancements in computer and communication technology enabled the rapid growth of e-health services, which can nowadays provide various electronic methods (e.g., obtaining online consent, exchanging health data). In this context, user authentication is an essential security task within modern healthcare systems performed daily by millions of patients across the globe. Nevertheless, most e-health service providers often employ traditional text-based password solutions which result in increased cognitive load and often lead to poor usability and security. In this paper, we present the design and development of a patient-centric user authentication system, which offers a flexible, personalized and multi-factor user authentication solution to patients. The suggested solution is currently being implemented and evaluated within the EU Serums project – Securing Medical Data in Smart Patient-Centric Healthcare Systems, which is a research project supported by the European Commission (EC) under the Horizon 2020 Framework Programme (H2020).
SESSION: Session 4: Workshop on Explainable User Models and Personalised Systems (ExUM 2020)
UMAP 2020 Workshop on Explainable User Models and Personalised Systems (ExUM) Chairs’ Welcome & Organization
The majority of existing research in the field of recommendation systems is aimed at optimizing accuracy metrics for given datasets, which leads to an algorithm-driven design of resulting solutions. Given a lack of understanding of the dataset characteristics and insufficient diversity of represented individuals, such approaches lead to amplifying the hidden data biases and existing disparities. In this research, we address this problem by proposing a Persona Prototyping approach that selects a set of the most representative user individuals to help in understanding the complex distribution of user interests and performing a proper qualitative evaluation of recommendation algorithms. A hierarchical density-based clustering technique is applied to distinguish diverse user groups and select their prototypes. Each of the selected representatives is presented in an easily understandable form of a textual user story describing the prototype behaviors, inspired by the concept of persona from the interaction design. We evaluated the diversity and representativeness of selected individuals and the results show that the proposed method is capable of identifying diverse interest archetypes and can be used to improve the qualitative analysis of recommendations and to test how well they respond to the diversity of user needs.
Towards Queryable User Profiles: Introducing Conversational Agents in a Platform for Holistic User Modeling
In this article we introduce the concept of queryable user profile, that is to say, a representation of the user that can be queried through natural language requests.
Such a representation allows the user to inspect the information encoded in her own profile and is intended to: (i) make profiling and personalization processes more transparent and accountable; (ii) improve self-awareness and self-consciousness about personal data spread across the Web and personal devices.
To this end, we designed MyrrorBot, a conversational agent built on top of a platform for holistic user modeling called Myrror. Basically, the system supports two groups of intents: (i) natural language requests to inspect the information encoded in the profile; (ii) personalized access to online services, such as music, video, news, and food recommendation. In both scenarios, each question is caught by the conversational agent, which interprets the user’s information need and generates an appropriate answer that fulfills the request.
In the experimental evaluation, we investigated both users’ acceptance of the system and the time required to access the information encoded in the profile. The results showed that our system significantly improves the way people can access their personal information, confirming the validity of the intuition and paving the way for further development of the system.
Recommender systems have achieved considerable maturity and accuracy in recent years. However, the rationale behind recommendations mostly remains opaque. Providing textual explanations based on user reviews may increase users’ perception of transparency and, by that, overall system satisfaction. However, little is known about how these explanations can be effectively and efficiently presented to the user. In the following paper, we present an empirical study conducted in the domain of hotels to investigate the effect of different textual explanation types on, among others, perceived system transparency and trustworthiness, as well as the overall assessment of explanation quality. The explanations presented to participants follow an argument-based design, which we propose to provide a rationale to support a recommendation in a structured way. Our results show that people prefer explanations that include an aggregation using percentages of other users’ opinions, over explanations that only include a brief summary of opinions. The results additionally indicate that user characteristics such as social awareness may influence the perception of explanation quality.
On the Use of Feature-based Collaborative Explanations: An Empirical Comparison of Explanation Styles
A Look Inside the Black-Box: Towards the Interpretability of Conditioned Variational Autoencoder for Collaborative Filtering
Deep learning-based recommender systems nowadays define the state of the art. Unfortunately, their limited interpretability restricts their application in scenarios in which explainability is required or desirable. Many efforts have been devoted to injecting explainable information into deep models, but much work remains to be done to fill this gap. In this paper, we take a step in this direction by providing an intuitive interpretation of the inner representation of a conditioned variational autoencoder (C-VAE) for collaborative filtering. The interpretation is performed visually by plotting the principal components of the latent space learned by the model on MovieLens. We show that, in the latent space, conditions on correlated genres map users into close clusters. This characteristic enables the model to be used for profiling purposes.
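The kind of latent-space inspection described above can be reproduced with a plain SVD-based principal component projection. The sketch below assumes only a matrix of latent user representations (one row per user) and is not the authors’ code.

```python
import numpy as np

def principal_components(latent, k=2):
    """Project latent representations onto their top-k principal components
    (via SVD of the centered matrix) for low-dimensional inspection."""
    latent = np.asarray(latent, dtype=float)
    centered = latent - latent.mean(axis=0)
    # Right singular vectors are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

Plotting the two returned columns against each other, colored by conditioning genre, would reproduce the kind of cluster visualization the abstract describes.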
Explanations are a key concept of public-service remits. For their linear program, public broadcasters regularly report on how they reflect diversity and balance in their programming. For personalized public media, however, explanations must be different, as not one but thousands of individual playouts have to be explained. We propose an approach to designing explanations for content-based radio recommendations based on word embeddings, and evaluate the acceptance of our explanations through interviews with test listeners. As a result, we reveal a paradox: test listeners appreciate having explanations but, at the same time, do not intend to use the explanation feature often.
SESSION: Session 5: Fairness in User Modeling, Adaptation and Personalization (FairUMAP 2020)
Online media is increasingly selected and filtered by recommendation engines. YouTube is one of the most significant sources of socially-generated information, and as such its recommendation policies are important to understand. Because of YouTube’s revenue model, the nature of its recommendation policies is fairly opaque. Hence, we present an empirical exploration of the nature of YouTube recommendations, concentrating on socially-impactful dimensions. First, we confirm that YouTube’s recommendations generally “lead away” from reliable information sources, with a tendency to direct users over time toward video channels exposing extreme and unscientific viewpoints. Second, we show that there is a fundamental tension between user privacy and extreme recommendations. We show that in general, users who seek privacy by keeping personal information hidden, receive much more extreme and unreliable recommendations from the YouTube engine. This drawback of user privacy in the presence of recommender systems has not been widely appreciated. We quantify this effect along various dimensions, including its dynamics in time, and show that the tradeoff between privacy and unreliability of recommendations is generally pervasive in the YouTube recommendation process.
Vision-based cognitive services (CogS) have become crucial in a wide range of applications, from real-time security and social networks to smartphone applications. Many services focus on analyzing images of people. When it comes to facial analysis, these services can be misleading or even inaccurate, raising ethical concerns such as the amplification of social stereotypes. We analyzed popular Image Tagging CogS that infer emotion from a person’s face, considering whether they perpetuate racial and gender stereotypes concerning emotion. By comparing CogS- and human-generated descriptions of a set of controlled images, we highlight the need for transparency and fairness in CogS. In particular, we document evidence that CogS may actually be more likely than crowdworkers to perpetuate the stereotype of the “angry black man” and often attribute “emotions of hostility” to Black individuals.
Fairness concerns about algorithmic decision-making systems have been mainly focused on the outputs (e.g., the accuracy of a classifier across individuals or groups). However, one may additionally be concerned with fairness in the inputs. In this paper, we propose and formulate two properties regarding the inputs of (features used by) a classifier. In particular, we claim that fair privacy (whether individuals are all asked to reveal the same information) and need-to-know (whether users are only asked for the minimal information required for the task at hand) are desirable properties of a decision system. We explore the interaction between these properties and fairness in the outputs (fair prediction accuracy). We show that for an optimal classifier these three properties are in general incompatible, and we explain what common properties of data make them incompatible. Finally we provide an algorithm to verify if the trade-off between the three properties exists in a given dataset, and use the algorithm to show that this trade-off is common in real data.
With increasing diversity in the labor market as well as the workforce, employers receive resumes from an increasingly diverse population. However, studies and field experiments have confirmed the presence of bias in the labor market based on gender, race, and ethnicity. Many employers use automated resume screening to filter the many possible matches. Depending on how the screening algorithm is trained, it can exhibit bias towards a particular population by favoring certain socio-linguistic characteristics. Resume writing style and socio-linguistics are a potential source of bias, as they correlate with protected characteristics such as ethnicity. A biased dataset often translates into biased AI algorithms, and de-biasing algorithms are being contemplated. In this work, we study the effects of socio-linguistic bias on resume-to-job-description matching algorithms. We develop a simple technique, called fair-tf-idf, to match resumes with job descriptions in a fair way by mitigating the socio-linguistic bias.
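The fair-tf-idf method is the paper’s own; as a baseline illustration of the matching step it modifies, the sketch below scores documents against each other with plain tf-idf cosine similarity. A fairness-aware variant would additionally down-weight terms correlated with protected characteristics, a detail we do not attempt to reproduce here.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain tf-idf vectors (as sparse dicts) for a list of documents.

    Uses smoothed idf = log((1 + N) / (1 + df)). The paper's fair-tf-idf
    would further adjust weights of socio-linguistically biased terms.
    """
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse tf-idf dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

With such a scorer, each resume would be ranked against a job description by its cosine score, and bias mitigation amounts to changing how the term weights are computed.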
The growing inclusion of information and communication technologies in our everyday life sets the scene for the development of personalized public services. Their public character brings along challenges that have not necessarily been dealt with in commercial applications, especially in terms of optimizing for the common good which requires moving away from a purely personalized-oriented approach. In this paper, we claim that to address these challenges, we can learn from two best practices in the design of digital public services: participatory design and open data.
SESSION: Session 6: HAAPIE 2020: 5th International Workshop on Human Aspects in Adaptive and Personalized Interactive Environments
HAAPIE 2020: 5th International Workshop on Human Aspects in Adaptive and Personalized Interactive Environments Chairs’ Welcome
Since emotion detection mostly employs supervised machine learning, large labeled datasets are needed to train accurate detectors. Currently, there is a lack of open datasets, especially in the domain of confusion detection on the web. In this paper, we introduce a confusion detection dataset comprising two modalities: users’ mouse movements and eye movements. The dataset was gathered during a quantitative controlled user study with 60 participants. We chose a travel agency web application for the study, in which we carefully designed six tasks reflecting the common behavior and problems of day-to-day users. In the paper, we also discuss the issue of labeling emotional data during the study, and provide an exploratory analysis of the dataset and insights into confused users’ behavior.
Towards Personalisation for Learner Motivation in Healthcare: A Study on Using Learner Characteristics to Personalise Nudges in an e-Learning Context
Lifelong learning is a key requirement for anyone working in healthcare, but many healthcare professionals find it challenging to undertake learning activities during their daily tasks. Digital solutions such as e-learning have been proposed to encourage and support the self-management of learning activities. In order to enhance the effectiveness of e-learning provision, personalised interventions in the form of prompts or nudges can be used, but first we need to ascertain (i) what kinds of nudges are effective in an e-learning scenario, and (ii) which learner characteristics are useful for personalising such nudges. In this paper we report the results of a study among medical and healthcare students that looks at the relationships between users’ interests, demographics and psychological traits, and the perceived effectiveness of five choice architecture techniques implemented as textual nudges on an e-learning platform in the healthcare domain. We found that even without personalisation different nudges vary in effectiveness in this context, and that interest (and, to a lesser extent, other user characteristics) influences the perceived effectiveness of nudges. We finish with a set of recommendations for nudge design in this domain.
Simulation Environment for Guiding the Design of Contextual Personalization Systems in the Context of Hearing Aids
Adjusting the settings of hearing aids in a clinic is challenging, as the measured thresholds of audibility do not reflect many aspects of cognitive perception or the resulting differences in auditory preferences across different contexts. Online personalization systems have the potential to solve this problem, yet the lack of contextual user preference data constitutes a major obstacle to designing and implementing them. To address this challenge, we propose a simulation-based framework to inform and accelerate the development of online contextual personalization systems for hearing aids. We discuss how to model hearing aid users and context while allowing partial observability, and propose how to generate plausible preference models using Gaussian Processes, incorporating assumptions about the environment in a controlled way. Finally, using a simple example, we demonstrate how an uncertainty-driven agent can efficiently learn from noisy user responses within the proposed framework. We believe that such simulated environments are vital for the successful development of complex context-aware online recommender systems.
The added value of the Social Internet of Things (SIoT) has been constantly highlighted in recent years. The idea is to exploit the social relationships among real-world smart heterogeneous objects to the benefit of their owners, through, e.g., search and discovery, dedicated services, and task augmentation. In this paper we discuss a decentralized human-centered simulator, DANOS, that enhances objects’ profiles and their interaction behavior with intelligence based on specific human aspects, i.e., personality traits. Preliminary results show that when objects travel with intelligence in the virtual space, they are able to locate similar objects faster, establishing stronger and higher-quality relationships while at the same time minimizing network complexity and load. Such results increase the probability of discovering information faster based on given intents and of providing best-fit recommendations at lower cost.
Lifelong Learning encompasses vast learning opportunities, and MOOCs are a learning environment that can be up to the challenge if current modeling challenges are addressed. Studies have shown the importance of modeling the learner for a more personal and tailored learning experience in MOOCs. Furthermore, Open Learner Models have proven their added value in facilitating learner follow-up and course content personalization. However, while modeling the learner’s knowledge is common practice, modeling the learner’s psychological state remains a relegated concern within the community, despite the wealth of scientific evidence backing up the importance and repercussions of the learner’s psychological state during and on the learning process.
Flow is a psychological state characterized by total immersion in a task and optimal performance. Programmers often refer to it as “being in the zone”. It reliably correlates with favorable learning metrics, such as motivation and engagement, among others. The aim of this paper is to propose a functional and technical architecture (comprising a Domain Model, a Flow Model, and an Open Learner Model for MOOCs in a Lifelong Learning context) accounting for the learner’s Flow state. This work is dedicated to MOOC designers and providers, pedagogical engineers, and psychology and education researchers who have difficulty incorporating and accounting for the Flow psychological state in a MOOC.
Personalization can be seen as a positive bias towards each user. However, it also has negative consequences, such as privacy loss, as well as the filter bubble or echo chamber effect due to the feedback loop it creates. In addition, the web system itself can bias user interaction, distorting the data used for personalization. In this presentation we discuss the interaction of these three elements: personalization, bias and privacy.
Modern Artificial Intelligence (AI) techniques, based on the statistical analysis of big volumes of data, are quickly gaining traction across various domains. Recommender Systems are a class of AI techniques that extract preference patterns from large traces of human behavior. Recommenders assist people in taking decisions that range from harmless everyday life dilemmas, e.g., what shoes to buy, to seemingly innocuous choices but with long-term, hidden consequences, e.g., what news article to read, up to more critical decisions, e.g., which person to hire.
As more and more aspects of our everyday lives are influenced by automated decisions made by recommender systems, it becomes natural to question whether these systems are trustworthy, particularly given the opaqueness and complexity of their internal workings. These questions are timely posed in the broader context of concerns regarding the societal and ethical implications of applying AI techniques, which have also brought about new regulations, like the EU’s “Right to Explanation”.
In this talk, we discuss techniques for increasing the user’s trust in the decisions of a recommender system, focusing on fairness aspects and explanation approaches. On the one hand, fairness means that the system exhibits certain desirable ethical traits, such as being non-discriminatory, diversity-aware, and bias-free. On the other hand, explanations provide human-understandable interpretations of the inner working of the system. Both mechanisms can be used in tandem to promote trust in the system. In addition, we investigate user trust from the standpoint of different stakeholders that potentially have varying levels of technical background and diverse needs.
SESSION: Session 7: Workshop on Personalized Access to Cultural Heritage: PATCH’20
Ethics Guidelines for Trustworthy AI advocate for AI technology that is, among other things, more inclusive. Explainable AI (XAI) aims at making state-of-the-art opaque models more transparent, and advocates AI-based outcomes accompanied by a rationale explanation, i.e., an explanation that targets non-technical users. XAI and Responsible AI principles hold that the audience’s expertise should be included in the evaluation of explainable AI systems. However, AI has not yet reached all publics and audiences, some of which may need it the most. One example of a domain whose accessibility has not been much influenced by the latest AI advances is cultural heritage. We propose including minorities as special users and evaluators of the latest XAI techniques. In order to define catalytic scenarios for collaboration and improved user experience, we pose some challenges and research questions yet to be addressed by the latest AI models likely to be involved in such a synergy.
A Multiple Perspective Account of Digital Curation for Cultural Heritage: Tasks, Disciplines and Institutions
Cultural heritage management is a multiple-perspective enterprise in which several disciplines and practices contribute to successful dissemination and communication. Digital data in support of cultural heritage management are addressed by the digital curation process, which has been emerging to account for the diversity of disciplinary communities and cultural heritage organizations. Digital curation addresses the diversity of participating skills and practices by working on the relationship between cultural heritage objects and their digital counterparts. In particular, the innumerable initiatives for providing access to cultural heritage data are ideally coordinated by digital curation and are part of the process from the beginning. However, thorough reflections on its role and implementation in cultural heritage institutions are still lacking. In this paper, we provide a survey of the digital curation process, unpacking the component curatorial tasks together with the solutions that have been proposed in the literature and in application projects to account for the multiple perspectives at hand.
The personalized enjoyment of cultural heritage, especially during the health crisis we are currently experiencing, poses challenges to learning and educational processes that could benefit from the use of intelligent tools. By emphasizing the interactivity of the learning process, recent Technology Enhanced Learning (TEL) methodologies in particular can represent an enabling factor for the construction of common knowledge. This paper pursues the idea of integrating different heterogeneous artificial intelligence techniques for the personalized administration of stimuli in a dynamic learning environment. With this in mind, this paper describes a system called WikiTEL and its first instantiation for Cultural Heritage visits in the city of Matera.
The richness of Cultural Heritage (CH) sites exposes tourists to an information overload which makes it difficult to efficiently select the items that they like and can practically visit within a tour.
Faceted information exploration has been proposed as a solution for analyzing large sets of data. However, most works focus on the inspection of a single type of information, e.g., hotels or music. In contrast, CH items are heterogeneous: they include natural and artificial monuments and different types of artworks which might be visited within a single tour. Moreover, CH sites are often visited in groups, thus raising the expectation that all the involved people share information and decisions about what to do.
In order to address this issue, we propose a map-based faceted exploration model that makes it possible to create custom, long-lasting maps representing a shared information space for user collaboration, and temporally project these maps on the basis of fine-grained filters which help users focus on items associated to short-term, specific interests. Our model supports the user in the organization and filtering of CH information on the basis of multiple perspectives related to the attributes of items. We propose graphical widgets to support interactive data visualization, faceted exploration, category-based information hiding and transparency of results at the same time. The widgets are based on the sunburst diagram, which compactly displays visualization criteria on data categories by showing facets and facet values in a circular structure.
Cultural Heritage exploration is interesting for the development of inclusive tourist guides because it exposes visitors to different types of challenges, from steering content recommendation to visitors’ interests and cognitive capabilities, to the suggestion of places that can be effectively reached and visited under different types of constraints: e.g., temporal and physical ones. In this work we are interested in the needs of people with Autism in order to support them in the exploration of a geographic area. Specifically, this paper presents a mobile tourist guide that we are developing to help people in visiting new places. The app is an evolution of PIUMA (Personalised Interactive Urban Maps for Autism), conceived to help autistic citizens in their everyday movements. It shows a map tailored to users with Autism Spectrum Disorder. In particular, it presents a personalized selection of safe Points of Interest, i.e., places that are, at the same time, interesting for the user and have “safe” characteristics from the sensory point of view, such as being quiet, scarcely crowded, or with smooth lights. In this paper, we present how we intend to extend PIUMA to support tourists.
Tracking museum visitors may provide useful insights about them, enabling curators and personnel to better manage visitor flows and the arrangement of the museum’s works, and to develop a recommender system as well. Tracking visits in a museum environment is an expensive task if performed without automatic tracking systems. For this reason, many automatic tracking systems have been proposed in the research literature. However, some of them are expensive (e.g., systems based on light detection and ranging (LIDAR) technology), or require active collaboration from visitors (e.g., systems based on wearable devices). In this work, we propose a deep learning object detection approach to the problem of tracking visitors. The proposed system can accurately detect specific objects in videos, thus allowing for the careful measurement of the spatial and temporal movements of a visitor in a museum scenario. The system requires only off-the-shelf inexpensive devices and deep learning models for object detection and recognition.
This article presents a study of a ubiquitous learning application named YouuHE, developed to support Heritage Education experiences by enabling local citizens to create and share collective memories about historical places. Users can receive personalized content according to their location and personal interests. The study lasted 20 weeks, and the participants were all seniors belonging to a pre-existing coexistence group. The preliminary results identified three motivational aspects related to the learning process. In addition, the support offered by the application proves effective as a potential resource for creating and sharing memories in this kind of ubiquitous heritage education experience.
Motivational Principles and Personalisation Needs for Geo-Crowdsourced Intangible Cultural Heritage Mobile Applications
Whether for altruistic reasons, personal gain or third parties’ interests, users are influenced by different kinds of motivations when using mobile geo-crowdsourcing applications (geoCAs). These reasons, extrinsic and/or intrinsic, must be factored in when evaluating the use intention of these applications and how effective they are. A functional geoCA, particularly if designed for Volunteered Geographic Information (VGI), is one that persuades and engages its users by accounting for their diversity of needs over time. This paper explores a number of proven and novel motivational factors aimed at the preservation and collection of Intangible Cultural Heritage (ICH) through geoCAs. By providing an overview of personalisation research and digital behaviour interventions for geo-crowdsourced ICH, the paper examines the most relevant usability and trigger factors for different crowd users, supported by a range of technology-based principles. In addition, we present the case of StoryBee, a mobile geoCA designed for “crafting stories” by collecting and sharing user-generated content based on users’ locations and favourite places. We conclude with an open-ended discussion of the ongoing challenges and opportunities arising from the deployment of geoCAs for ICH.
In this short position paper we review two methodologies, persuasion and digital nudging, and their potential implications for Cultural Heritage (CH) use. As both involve personalization as a key component, they seem appropriate for discussion in this workshop. Both open new opportunities for better serving visitors to CH sites, while at the same time posing ethical dilemmas.
Transcribing historical handwritten documents is a difficult task. One facet is that it is very tedious work, normally performed by experts. Some newer techniques rely on crowdsourcing of manual transcription. Crowdsourcing helps speed up the transcription process, but it is still limited and brings with it new challenges.
Though crowdsourcing transcriptions can imply a repetitive task done by a large group of users, there is in fact room for personalization. This paper reports on insights gathered for future personalization from the “Tikkoun Sofrim” project, which implements a framework combining automatic handwritten text recognition with crowdsourcing for the transcription of complete handwritten manuscripts. As a case study, the Hebrew “Midrash Tanhuma” manuscripts were selected.