Demo and Late-Breaking Results
Group recommendations are an extension of “single-user” personalized recommender systems (RS), where the final recommendations should comply with the preferences of several group members. An important challenge in group RS is fairness, i.e., no user’s preferences should be largely ignored by the RS. Traditional strategies, such as “least misery” or “average rating”, tackle the problem of fairness, but they resolve it separately for each item, which may cause a systematic bias against some group members. In contrast, this paper considers both fairness and relevance as rank-sensitive list properties. We propose the EP-FuzzDA algorithm, which utilizes an optimization criterion encapsulating both fairness and relevance. In the conducted experiments, EP-FuzzDA outperforms several state-of-the-art baselines. Another advantage of EP-FuzzDA is its capability to adjust to the non-uniform importance of group members, enabling it, e.g., to maintain long-term fairness across several recommendation sessions.
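To illustrate the general idea of rank-sensitive group fairness (this is our own simplified sketch, not the actual EP-FuzzDA algorithm, whose optimization criterion is detailed in the full paper), a greedy aggregator can track each member's accumulated satisfaction over the list prefix and, at every rank, blend average group relevance with the gain of the currently least-satisfied member:

```python
# Illustrative greedy, rank-sensitive group aggregation (NOT EP-FuzzDA):
# fairness is evaluated over the growing list prefix rather than per item.

def greedy_fair_rank(scores, k, alpha=0.5):
    """scores: {user: {item: relevance}}; returns a ranked list of k items."""
    users = list(scores)
    items = set.union(*(set(s) for s in scores.values()))
    satisfaction = {u: 0.0 for u in users}
    ranking = []
    for _ in range(k):
        def value(item):
            avg_rel = sum(scores[u][item] for u in users) / len(users)
            # gain for the member with the lowest accumulated satisfaction
            worst = min(users, key=lambda u: satisfaction[u])
            return alpha * avg_rel + (1 - alpha) * scores[worst][item]
        best = max(items - set(ranking), key=value)
        ranking.append(best)
        for u in users:
            satisfaction[u] += scores[u][best]
    return ranking
```

Because the least-satisfied member changes as the list grows, a member favored at rank 1 will not dominate subsequent ranks, unlike item-by-item strategies.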
A Methodology for the Offline Evaluation of Recommender Systems in a User Interface with Multiple Carousels
Many video-on-demand and music streaming services provide the user with a page consisting of several recommendation lists, i.e., widgets or swipeable carousels, each built with a specific criterion (e.g., most recent, TV series, etc.). Finding efficient strategies to select which carousels to display is an active research topic of great industrial interest. In this setting, the overall quality of the recommendations of a new algorithm cannot be assessed by measuring solely its individual recommendation quality. Rather, it should be evaluated in a context where other recommendation lists are already available, to account for how they complement each other. This is not considered by traditional offline evaluation protocols. Hence, we propose an offline evaluation protocol for a carousel setting in which the recommendation quality of a model is measured by how much it improves upon that of an already available set of carousels. We report experiments on publicly available datasets in the movie domain and notice that under a carousel setting the ranking of the algorithms changes. In particular, when a SLIM carousel is available, matrix factorization models tend to be preferred, while item-based models are penalized. We also propose to extend ranking metrics to the two-dimensional carousel layout in order to account for a known position bias, i.e., users will not explore the lists sequentially, but rather concentrate on the top-left corner of the screen.
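A minimal sketch of how a ranking metric can be extended to a two-dimensional layout (our own illustration, not the paper's exact metric): discount each grid cell's gain along both axes, so that hits in the top-left corner count most.

```python
import math

# DCG over a carousel grid: the gain of a relevant item is discounted by
# both its row (carousel index) and column (slot index), reflecting that
# user attention concentrates on the top-left of the screen.

def dcg_2d(grid, relevant):
    """grid: list of carousels, each a list of item ids; relevant: set of ids."""
    total = 0.0
    for row, carousel in enumerate(grid):
        for col, item in enumerate(carousel):
            if item in relevant:
                # log discount applied independently along both axes
                total += 1.0 / (math.log2(row + 2) * math.log2(col + 2))
    return total
```

Normalizing by the score of an ideal arrangement would yield a two-dimensional analogue of NDCG.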
Businesses often deploy recommenders to help users fulfil their needs and to provide a better user experience with their platform. However, they may also need to guide users towards certain content depending on time, situation, and business needs. For example, giving a fair exposure opportunity to less popular items provides a healthier business model in which more suppliers can survive and thrive. In this work, we explore target-aware diversification as an approach to mitigate exposure bias in a calibrated way. Provided with a target ratio for each category’s exposure, we balance this objective with the relevance of the recommended items through two main approaches: 1) diversification based only on the system’s predefined target and 2) diversification based on the system’s target while taking into account the user’s tolerance for diversity. We explore the effectiveness of our proposed models on a publicly available dataset. Experimental results show that our approach can systematically diversify the recommendations towards a predefined target while maintaining the relevance of the recommendations to a good extent. We also conclude that the trade-off between achieving the target and maintaining relevance has a close connection with the feasibility of the defined target given the users’ previous consumption.
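A hedged sketch of the general target-aware re-ranking idea (our illustration, not the paper's actual models): greedily pick items whose addition keeps the category exposure distribution close to the predefined target, traded off against predicted relevance via a weight `lambda_`.

```python
# Greedy target-aware diversified re-ranking. Assumes every item's
# category appears as a key of `target`.

def rerank(candidates, category, target, k, lambda_=0.5):
    """candidates: {item: relevance}; category: {item: cat};
    target: {cat: desired exposure share}; returns k re-ranked items."""
    chosen = []
    counts = {c: 0 for c in target}
    for _ in range(k):
        def score(item):
            # total deviation from the target exposure if `item` were added
            n = len(chosen) + 1
            dev = sum(abs((counts[c] + (category[item] == c)) / n - target[c])
                      for c in target)
            return (1 - lambda_) * candidates[item] - lambda_ * dev
        best = max((i for i in candidates if i not in chosen), key=score)
        chosen.append(best)
        counts[category[best]] += 1
    return chosen
```

Setting `lambda_` per user would correspond to the second approach above, where the user's own tolerance for diversity modulates the calibration pressure.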
The impact of Graphical User Interfaces (GUI) for recommender systems is a little explored area. Therefore, we conduct an empirical study in which we create, deploy, and evaluate seven different GUI variations. We use these variations to display 68,260 related-blog-post recommendations to 10,595 unique visitors of our blog. The study shows that the GUIs have a strong effect on the recommender systems’ performance, measured in click-through rate (CTR). The best performing GUI achieved a 66% higher CTR than the worst performing GUI (statistically significant with p<0.05). In other words, with a few days of work to develop different GUIs, a recommender-system operator could increase CTR notably – maybe even more than by tuning the recommendation algorithm. In analogy to the ‘unreasonable effectiveness of data’ discussion by Google and others, we conclude that the effectiveness of graphical user interfaces for recommender systems is equally ‘unreasonable’. Hence, the recommender system community should spend more time on researching GUIs for recommender systems. In addition, we conduct a survey and find that the ACM Recommender Systems Conference has a strong focus on algorithms – 81% of all short and full papers published in 2019 and 2020 relate to algorithm development, and none to GUIs for recommender systems. We also surveyed the recommender systems of 50 blogs. While most displayed a thumbnail (86%) and had a mouseover interaction (62%), other design elements were rare. Only a few highlighted top recommendations (8%), displayed rankings or relevance scores (6%), or offered a ‘view more’ option (4%).
Nowadays, many companies are offering chatbots and voicebots to their customers. Despite much recent success in natural language processing and dialogue research, the communication between a human and a machine is still in its infancy. In this context, dialogue personalization could be a key to bridging some of the gap, making sense of users’ experiences, needs, interests and mental models when engaged in a conversation. Along this line, we propose to automatically learn users’ features directly from the dialogue with the chatbot, in order to enable the adaptation of the response accordingly and thus improve the interaction with the user. In this paper, we focus on the user’s domain expertise and, assuming that expertise affects linguistic features of the language, we propose a vocabulary-centered model joint with a Deep Learning method for the automatic classification of the users’ expertise at word- and message-level. Experiments on 5,000 real messages taken from a commercial telco chatbot yielded high accuracy scores, demonstrating the feasibility of the proposed task and paving the way for novel user-aware applications.
Wearable System for Personalized and Privacy-preserving Egocentric Visual Context Detection using On-device Deep Learning
Wearable egocentric visual context detection raises privacy concerns and is rarely personalized or on-device. We created a wearable system, called PAL, with on-device deep learning so that the user images do not have to be sent to the cloud for processing, and can be processed on-device in a real-time, offline, and privacy-preserving manner. PAL enables human-in-the-loop context labeling using wearable audio input/output and a mobile/web application. PAL uses on-device deep learning models for object and face detection, low-shot custom face recognition (~1 training image per person), low-shot custom context recognition (e.g., brushing teeth, ~10 training images per context), and custom context clustering for active learning. We tested PAL with 4 participants, 2 days each, and obtained ~1000 in-the-wild images. The participants found PAL easy to use, and each model had >80% accuracy. Thus, PAL supports wearable, personalized, and privacy-preserving egocentric visual context detection using human-in-the-loop, low-shot, and on-device deep learning.
Fairness is one of the crucial aspects of modern Recommender Systems which has recently drawn substantial attention from the community. Many recent works have addressed this aspect by studying the fairness of the recommendation through different forms of evaluation methodologies and metrics. However, the majority of these works have mainly concentrated on the recommendation algorithms and hence measured the fairness from the algorithmic viewpoint. While such a viewpoint may still play an important role, it does not necessarily provide a comprehensive picture of how the users may perceive the overall fairness of a recommender system.
This paper extends the prior works and goes beyond algorithmic fairness in recommender systems by highlighting the non-algorithmic viewpoint on fairness in these systems. The paper proposes an evaluation methodology that can be used to assess the fairness of a recommender system as perceived by its users. We have adopted a well-known model and re-formulated it to suit the particular characteristics of recommender systems and, accordingly, their corresponding users. Our proposed methodology can be used to elicit the feedback of the users along three important dimensions, i.e., Engagement, Representation, and Action & Expression. We have formed a set of survey questions that address the aforementioned dimensions, as examples of how to assess the fairness of a recommender system.
This work explores an unsupervised approach to modelling players of a 2D cube puzzle game, with the ultimate goal of customising the game for particular players based solely on their interaction data. To that end, user interactions when solving puzzles are coded as images. Then, a feature embedding is learned for each puzzle with a convolutional network trained to regress the players’ completion effort in terms of time and number of clicks. Next, the well-known bag-of-words technique is applied at two levels. First, sets of puzzles are represented using the puzzle feature embeddings as the input space. Second, the resulting first-level histograms are used as the input space for characterising players. As a result, new players can be characterised in terms of the resulting second-level histograms. Preliminary results indicate that the approach is effective for characterising players in terms of performance. It is also tentatively observed that other personal perceptions and preferences, beyond performance, are somehow implicitly captured from behavioural data.
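The core bag-of-words step can be sketched as follows (a simplified illustration; in practice the codebook would come from, e.g., k-means over the learned puzzle embeddings, which are omitted here): each vector is quantized to its nearest codeword, and a set of vectors becomes a normalized codeword histogram. Applying the same function to first-level histograms yields the second-level player descriptor.

```python
# Bag-of-words quantization: vectors -> normalized codeword histogram.

def nearest(vec, codebook):
    """Index of the codeword closest to vec (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

def bow_histogram(vectors, codebook):
    """Normalized histogram of codeword assignments for a set of vectors."""
    counts = [0] * len(codebook)
    for v in vectors:
        counts[nearest(v, codebook)] += 1
    total = sum(counts)
    return [c / total for c in counts]
```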
This paper addresses the so-called new item problem in video recommender systems, a form of the cold start problem. The new item problem occurs when a new item is added to the system catalog and the recommender system has no or little data describing that item. This could cause the system to fail to meaningfully recommend the new item to the users. We propose a novel technique that can generate cold start recommendations by utilizing automatic visual tags, i.e., tags that are automatically annotated by deeply analyzing the content of the videos and detecting faces, objects, and even celebrities within them. The automatic visual tags do not need any human involvement and have been shown to be very effective in representing the video content. In order to evaluate our proposed technique, we have performed a set of experiments using a large dataset of videos. The results have shown that the automatically extracted visual tags can be incorporated into the cold start recommendation process and achieve superior results compared to recommendation based on human-annotated tags.
Fighting in the karate martial art requires great dexterity and the coordination of multiple physical and psychological factors. A key and fundamental skill for success in this endeavor is anticipating the opponent's movements. Anticipation is an innate attribute, but it can also be developed with training. Vision training, in pursuit of the peripheral vision that allows the opponent's body to be monitored, is worked on continuously in martial arts training. Nonetheless, peripheral vision can be of use outside the martial arts domain, such as when driving (to be able to notice the environment) or reading (to increase reading speed). New technologies can bring new training methods that enhance peripheral vision by training motion anticipation. To this end, a tool designed for karate training is used to evaluate whether computer vision filters can facilitate motion anticipation performance in karate practice. Our research aims to model the interaction of the fighters in order to improve the reading of the opponent's body as well as dexterity in anticipating the opponent's attacks, with the aim of building a personalized system for psychomotor learning. A user study is carried out to evaluate whether computer vision can be used to improve the prediction of punch attacks launched by the rival as well as the response time to them.
Many vacancy texts do not reach their full potential; vacancies are too generic, too specific, or biased. In this demo paper, we propose a research prototype that helps users to create a better vacancy text using AI techniques in the labor market domain. The vacancy text proposed by the user is analysed using a function classifier, skill extractor, bias detector and skill overlap algorithm. The Competent database, consisting of functions, descriptions and skills, as well as an annotated set of Dutch vacancy texts, are fed to the AI techniques. In a small user evaluation, we show that the prototype has the potential to help users in their need to create better vacancy texts. In future work, we aim to test the tool with more participants and improve the different functionalities.
Human Interaction and Networking Transitions System (HINTS) for Social User Analytics and Modeling of Offline Team Group Interaction Information
In this paper, we focus on semantic interest modeling and present SIMT, a toolkit that harnesses semantic information to effectively generate user interest models and compute their similarities. SIMT follows a mixed-method approach that combines unsupervised keyword extraction algorithms, knowledge bases, and word embedding techniques to address the semantic issues in the interest modeling task.
In this demo, we present an interactive recommender system that suggests recipes to participants of a weight loss programme. Nutritional constraints imposed by the programme serve as initial information to tackle the crucial cold start problem.
This paper presents the design and development of COVID Pacman-R – a persuasive game to promote awareness and adoption of COVID-19 precautionary measures. The game simulates how COVID-19 spreads and uses various persuasive strategies to engage and educate users about the various precautionary measures, thus motivating the desired behaviour change. The game also creates an engaging experience using the principles of game-based learning.
ACM LPRS 2021, organized in conjunction with the 29th International Conference on User Modeling, Adaptation and Personalization, is the first edition of the workshop proudly held within the UMAP Conference series. We present here the papers and their subjects. The full version of each article is available in the conference proceedings; each paper was presented orally by its authors at the event.
The customization of learning pathways based on competency profiles and game-based learning are increasingly being adopted by education stakeholders because of their potential to maximize the effectiveness of instruction. However, actual learning can vary among individuals, particularly according to specific needs (e.g., L2 learners, students with dyslexia, hearing-impaired children, etc.). In this article, we first present GamesHUB, a pedagogical games platform for primary school pupils that supports the creation of playful and personalized learning paths. Secondly, we address the issue of adaptive learning, according to the different pupils’ profiles, through the integration of pedagogical resources based on adaptive pathways in the framework of the European project PEAPL. We discuss how these pathways are elaborated so as to approximate the didactic sequence frames proposed for the ordinary classroom.
Learning Path Recommender Systems (LPRS) are systems that recommend learning resources to be consumed in a determined sequence. This kind of recommendation is useful in scenarios where learning needs to be personalized, especially when students must be guided through an overwhelming amount of resources. LPRS have gained more attention in recent years because of the popularity of e-learning and the need to guide, motivate and engage students in big-data scenarios. The systematic mapping proposed in this paper seeks to understand how LPRS are designed and how they are evaluated. Our findings suggest that the papers mostly use content-based algorithms and that there is a lack of discussion on explainable and trustworthy LPRS.
Item response theory (IRT) has been widely used in psychometrics, and the design of adaptive tests has recently been adapted for use in recommender systems for educational environments. This work aims to identify research that combines recommendation strategies for educational environments with item response theory, and to find possible research spaces for improving recommendation quality and identifying learning trajectories. The study covered the years 2015 to 2020 and found an increase in this type of research between 2017 and 2019. The resulting works were classified into three main axes, according to an analysis of the approaches used by the authors: identification of learning trajectories, prediction of learning levels (proficiency, ability, and others), and recommendation of educational resources or learning tasks.
2nd Workshop on Adapted intEraction with SociAl Robots
Past research has shown that color can evoke an emotional response from people in various situations. Exploring this finding for robots, this paper presents a study with 175 participants who evaluated how eight robots designed with different colors would be viewed by society along a number of dimensions. The results indicated that subjects thought society would discriminate against a black or rainbow colorized robot more so than a robot portrayed as white. Further, the black colorized robot was thought to be stronger than a white or yellow colorized robot and subjects indicated that a red and black robot would be selected more often to commit an assault than the other robots. Additionally, the data revealed that a rainbow-colored robot was more likely to be selected as an elementary school teacher and personal friend but would receive more disrespect within society.
The BRILLO (Bartending Robot for Interactive Long-Lasting Operations) project aims to deploy an autonomous robotic bartender that can naturally interact with customers in a real-world service point. In this scenario, this work presents a multi-modal personalised recommendation method for increasing users’ engagement with a bartending robot. The personalised recommendation is adopted at two levels: one for recommending the service (i.e., drinks), and another for selecting the human-robot interaction (HRI) modalities. The envisioned multi-modal personalised recommendation method enables the BRILLO robot to intelligently adapt its dialogues, pose and gestures, according to the user’s moods, attention behaviours, personal traits, and situational context (drink orders, group dynamics, etc.).
Social Robot for Health Check and Entertainment in Waiting Room: Child’s Engagement and Parent’s Involvement
To provide effective support in child health care, social robots’ behaviors should be well-tailored to the care context and situated user needs. This research focuses on a social robot (iPal) in the waiting room for a vaccination. In an experiment, children underwent a health check and afterwards, to pass the time, played a game with either the robot or a tablet. Children’s behaviors and self-reports were recorded. The children seemed to be more positively engaged when interacting with the robot (higher motivation to play a game, higher interaction volume, more smiling during the health check, more gesture and/or verbal expressive behaviors, less mobile phone distraction). Further, their individual characteristics (such as age and personality) and the social context (e.g., parent’s presence) affected children’s engagement (e.g., higher for young children) and parent’s involvement (e.g., higher in the tablet group, resulting in a higher percentage of answered questions during the health check). Here, we identified an interesting trade-off: the current robot supports child engagement (distracting from the stressful vaccination) but hinders collaboration between parent and child. In future research, we aim to improve the collaboration support of the robot.
Socially assistive robots (SARs) can help meet the growing need for rehabilitation assistance. We argue that personalization of human-robot interactions in the context of rehabilitation is multi-layered and needs to be frequently updated, as opposed to a single setting that might suffice in other contexts. In rehabilitation, personalization is not only important for establishing engagement; it is an essential component of the recovery of motor and cognitive abilities over a long-term interaction and an essential part of establishing trust between the patient and the SAR.
Robots designed to collaborate with human partners should be able to implicitly anticipate and adapt to their needs. To do so, robots need a framework supporting cognition and mutual understanding in social settings. We posit that even basic cognitive processes, such as learning, could benefit from considering the social and affective dimensions. In this direction, we propose a recently developed scenario, based on a competitive game, as a tool to steer the development of socially-aware competitive reinforcement learning (RL).
HAAPIE 2021: 6th International Workshop on Human Aspects in Adaptive and Personalized Interactive Environments
Nowadays, the profound digital transformation has upgraded the role of the computational system into an intelligent multidimensional communication medium that creates new opportunities, competencies, models and processes. The need for human-centered adaptation and personalization is even more recognizable since it can offer hybrid solutions that adequately support the rising multi-purpose goals, needs, requirements, activities and interactions of users. The HAAPIE workshop embraces the essence of the “human-machine co-existence” and brings together researchers and practitioners from different disciplines to present and discuss a wide spectrum of related challenges, approaches and solutions. In this respect, the sixth edition of HAAPIE includes 3 long papers and 4 short papers.
Autonomous vehicles with conditional automation are said to be the next step in the development of self-driving cars. The human driver still performs a critical role in them, by taking over control of the vehicle if prompted. As the technology still faces pending challenges, human drivers are also required to be able to detect and react in case of Autonomous Drive System (ADS) malfunctions. Within this context, in this work we argue that to assure safety during autonomous operation, the user state should be measured at all times, which is intended to support a “fallback ready state”. From an in-depth literature review, this article identifies the human factors involved in the aforementioned “fallback ready state” that affect the personalization of human-vehicle interaction.
The business data analytics domain exhibits a particularly diversified and demanding field of interaction for end-users. It entails complex tasks and actions, expressed through multidimensional data visualization and exploration content, that users with different business roles, skills and experiences need to understand and act upon in order to meet their goals. This engagement often proves overwhelming for professionals, highlighting the need for adaptive and personalized solutions that consider their level of expertise towards an enhanced user experience and quality of outcomes. However, adequately measuring the perceived expertise of individuals using standardized means is still an open challenge in the community, as most current approaches employ participatory research design practices that are time-consuming, costly, and difficult to replicate or to produce comparable, unbiased results for informed interpretations. Hence, this paper proposes a systematic alternative for capturing expertise through a Perceived Expertise Tool (PET) that is devised based on grounded theoretical perspectives and psychometric properties. A preliminary evaluation with 54 professionals in the data analytics domain showed acceptable internal consistency and validity of PET as well as its significant correlation with other affiliated theoretical and domain-specific concepts. These findings may suggest a good basis for the standardized modeling of users’ perceived expertise that could lead to effective adaptation and personalization.
Data quality is a major issue when conducting studies in the behavioral sciences. One of the possible threats to data quality in user modeling, in particular in questionnaire studies, is careless responding (CR). When responding carelessly, subjects do not pay sufficient attention to the questions and therefore compromise the interpretability of the responses. The aim of the current study was to gain a better understanding of the occurrence and identification of CR in Ecological Momentary Assessment (EMA) studies, where several questionnaires are usually administered daily to the participants over the course of days, weeks or even months. For this purpose, an explorative post-hoc analysis was conducted using the data of an existing EMA study in audiological research. Completion time, variance, skipped items, acquiescence bias and number of textboxes were analyzed as potential indicators for CR, both inter- and intraindividually. Furthermore, consistency was examined using linear mixed models and by scanning individual questionnaires. Results showed minimal systematic inconsistencies, indicating the absence of large-scale CR. However, this type of analysis might not be appropriate for identifying CR when it occurs only occasionally. Moreover, the reliability of indicators of CR might be limited in EMA studies, as the indicators also vary over the course of the study and between different situations. Possibilities for future studies are discussed.
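Two of the simplest indicators mentioned above can be computed per questionnaire as follows (an illustrative sketch; the threshold is an arbitrary placeholder, not the study's actual cut-off):

```python
# Simple careless-responding (CR) indicators for one completed
# questionnaire: suspiciously fast completion and zero response variance
# ("straightlining", i.e., the same answer to every item).

def cr_indicators(responses, seconds, min_seconds=10.0):
    """responses: numeric item answers; seconds: completion time."""
    n = len(responses)
    mean = sum(responses) / n
    variance = sum((r - mean) ** 2 for r in responses) / n
    return {
        "too_fast": seconds < min_seconds,   # quicker than plausible reading time
        "straightlining": variance == 0.0,   # identical answer to every item
        "variance": variance,
    }
```

In an EMA setting these indicators would be tracked per participant across sessions, since a value that is unremarkable for one person or situation may be anomalous for another.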
More and more aspects of our everyday lives are influenced by automated decisions made by systems that statistically analyze traces of our activities. It is thus natural to question whether such systems are trustworthy, particularly given the opaqueness and complexity of their internal workings. In this paper, we present our ongoing work towards a framework that aims to increase trust in machine-generated recommendations by combining ideas from three separate recent research directions, namely explainability, fairness and user interactive visualization. The goal is to enable different stakeholders, with potentially varying levels of background and diverse needs, to query, understand, and fix sources of distrust.
The news and stories that people find credible can influence the adoption of public health policies and determine their response to the pandemic, including the reception of controversial treatments. Although we trust people we know or admire, we should ask ourselves whether they are sufficiently competent to provide a reliable opinion about medical treatments. In this paper, we try to identify the professions, political views and psychological characteristics of Twitter users who shared information about controversial medical treatments by analysing the profile data of tweets published in English during the Covid-19 pandemic. We found that the profile descriptions of Twitter users are very heterogeneous, but the major categories of users are Christians, devoted family members, and fans of various music genres or political parties. We propose an automatic approach for user classification.
This paper presents a qualitative study in which we evaluate the core parts of an adaptive algorithm for next-exercise selection in an e-learning system. The algorithm was previously constructed from a series of studies where participants played the role of a teacher and chose the difficulty of a subsequent exercise for a learner based on their performance, mental effort and self-esteem. In this paper, we present these findings to real teachers to gain insights into whether the algorithm is effective and appropriate for future inclusion in an intelligent tutoring system. Overall, we found that teachers believed the recommendations from the algorithm were appropriate.
The modern business environment is empowered by the abundant availability of data and a plethora of sophisticated data analysis tools to identify and quickly address market needs. While these tools have evolved significantly in recent years, offering trailblazing data exploration experiences with stunning multi-modal visualizations, they neglect the importance of individualized, user-centred delivery of information and insights. As a result, users may require much more effort and time to reach decisions that have implications for both the short-term and long-term success and sustainability of an organization. This paper highlights the need for user-centred, persona-driven data exploration through adaptive data visualizations and personalized support for an end-to-end business process. It proposes an extended human-centred persona and discusses preliminary exploratory results in relation to the formulation of the contextual characteristics of a business environment, i.e., business tasks, visualizations and data.
Recommender systems based on collaborative filtering suffer from the problems of data sparsity and cold start. Trust-aware recommender systems exploit human trust relationships to improve recommendation performance. In this talk, I will briefly summarize the state-of-the-art methods along this line of research and highlight three representative works from our group. I will also point out several remaining challenges and interesting future directions.
ACM PATCH 2021, organized in conjunction with the 29th International Conference on User Modeling, Adaptation and Personalization, is the meeting point between researchers and practitioners of personalization in cultural heritage, aiming to enhance the user experience in digital and physical Cultural Heritage sites. The PATCH workshops started in 2007 and are typically held in conjunction with the UMAP, IUI and, more recently, AVI conference series. This paper summarizes the main ideas addressed in the articles accepted for presentation at PATCH 2021 and for publication in the workshop proceedings.
Participatory Monitoring in Cultural Heritage Conservation: Case Study: The Landscape Zone of the Bisotun World Heritage Site
Community participation in cultural heritage conservation has been a concern since the Venice Charter (1964). This approach has also been highlighted in the World Heritage documents. Accordingly, it is necessary to engage local people in all stages of protection, conservation and management. In addition, the Faro Convention (2005) emphasized that responsibilities regarding cultural heritage management must be shared between authorities and civil society to make joint action among different stakeholders possible. At the site of Bisotun, participatory monitoring means the systematic reporting and recording of issues regarding the cultural heritage properties by people living in the landscape zone, in order to achieve sustainable monitoring of those properties. In fact, the local people can monitor the progress of safeguarding and protecting their heritage through daily observation and by reporting relevant issues, if any. The main purpose of participatory monitoring in this research is to engage the local community in the monitoring of cultural heritage properties in the Bisotun World Heritage Site.
This article presents a personalized recommendation approach for textual and multimedia resources related to artistic and cultural points of interest (POIs). The approach exploits linked open data to retrieve content related to POIs, and social media to personalize its recommendation to the target user. The similarity between the social user profile and the related material is evaluated using the classic doc2vec model. A preliminary comparative analysis conducted with 20 real users showed encouraging experimental results in terms of perceived accuracy and beyond-accuracy metrics.
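The matching step described above can be sketched as a cosine-similarity ranking over embedding vectors. This is a minimal, hypothetical illustration: the vectors below are hand-made stand-ins for what a trained doc2vec model would infer for the user profile and each piece of POI-related content.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors (plain lists of floats).
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def rank_content(user_vec, content_vecs):
    # Rank POI-related content items by similarity to the user profile vector.
    scores = {cid: cosine_similarity(user_vec, vec)
              for cid, vec in content_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

In the paper's actual pipeline, these vectors would come from doc2vec inference over the social profile text and the content retrieved from linked open data.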
In the cultural heritage domain, many researchers have proposed personalization techniques and tools based on human and technological factors, aiming to deliver enhanced visitor experiences. During personalization, a vast amount of user information, such as geographical location, cultural background, interests, and preferences, can be used to create user models. However, this information can often be considered private, and thus privacy issues are raised when using personalized applications. In this position paper, we discuss the importance of privacy in personalized cultural heritage applications and propose a preliminary privacy preservation model.
Towards Personalized Social Recommendations for Cultural Heritage Activities: Methods and technology to enable cohesive and inclusive recommendations
The aim of the SPICE project is to build social cohesion, both between and within citizen communities, by developing tools and methods to support citizen curation. We define citizen curation as a process in which cultural objects are used as a resource by citizens to develop their own personal interpretations. Within communities, citizens can use their interpretations to build a representation of themselves and their shared perspective on culture. Interpretations can also be used to support social cohesion across groups. In this short position paper, we outline the methodologies and technologies needed to build a recommender system of cultural objects that implements these goals of social cohesion and inclusion.
Traditional recommender systems suggest Cultural and Natural Heritage items to visitors by matching the target user to the available options, one-to-one. However, the increasing diffusion of informal activities and events, supported by location-based services such as Airbnb, extends personalized recommendation to a many-to-one match-making task. Airbnb experiences, which any citizen can propose to offer geographic tours and thematic activities, are composed of at least two entities to be evaluated: the former is the experience itself (in terms of topic, cost, etc.); the latter is the host, who directly interacts with guests during the management of the planned activities. As both entities can dramatically influence guests’ perceptions, they should be jointly taken into account by recommender systems. This paper presents our preliminary work aimed at extending the personalized suggestion of Cultural Heritage items to such composite objects.
Adaptive and personalized systems have become pervasive technologies that are gradually playing an increasingly important role in our daily lives. Indeed, we are now used to interacting every day with algorithms that help us in several scenarios, ranging from services that suggest music to listen to or movies to watch, to personal assistants able to proactively support us in complex decision-making tasks.
As the importance of such technologies in our everyday lives grows, it is fundamental that the internal mechanisms that guide these algorithms are as clear as possible. Unfortunately, the current research tends to go in the opposite direction, since most of the approaches try to maximize the effectiveness of the personalization strategy (e.g., recommendation accuracy) at the expense of the explainability and the transparency of the model.
The main research question which arises from this scenario is simple and straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability?
The workshop aims to provide a forum for discussing such problems, challenges, and innovative research approaches in the area, by investigating the role of transparency and explainability on the recent methodologies for building user models or developing personalized and adaptive systems.
The explanation and justification of recommender systems' results are challenging research tasks. On the one hand, a model-based description that clarifies the reasoning approach behind the suggestions might be difficult to understand, or it might fail to convince the user if (s)he does not agree with the applied inference mechanism. On the other hand, an aspect-based justification based on a few characteristics might provide a partial view of items or, if more detailed, it might overload the user with too much information.
In order to address these issues, we propose a visual model aimed at justifying recommendations from a holistic perspective. Our model is based on a service-oriented summary of consumers’ experience with items. We use the Service Journey Maps to extract data about the experience with services from online reviews, and to generate a visual summary of such feedback, based on evaluation dimensions that refer to all the stages of service fruition. Thanks to a graphical representation of these dimensions (based on bar graphs), and on the provision of on-demand data about the associated aspects of items, our model enables the user to overview the recommendation list and to quickly identify the subset of results that deserve to be inspected in detail for a final selection decision. A preliminary user study, based on the Apartment Monitoring application, has provided encouraging results about the usefulness and efficacy of our model to enhance user awareness and decision-making in the presence of medium-size recommendation lists.
As eye tracking is becoming feasible on commodity devices, it provides a powerful tool for inferring users’ perceived relevance of objects. Yet the prediction quality depends on multiple parameters that have to be considered when designing the prediction model. In this paper, we review approaches to predict relevance from gaze with regard to five design issues: 1) extracting features, 2) defining the algorithm, 3) setting a prediction scope, 4) eliminating visual distractors, and 5) evaluating the system. The insights may serve as a guide to establish best practices for the design and evaluation of relevance prediction models, thus allowing for better comparability of future work. We further discuss promising fields of application that will drive future research on gaze-based relevance prediction.
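Design issue 1 (feature extraction) can be illustrated with a minimal sketch: aggregating gaze dwell time per on-screen object, a common input feature for relevance prediction. The fixation tuple format and the rectangular areas of interest (AOIs) are assumptions for illustration, not the paper's specification.

```python
def dwell_time_per_aoi(fixations, aois):
    """Aggregate gaze dwell time (ms) per area of interest (AOI).

    fixations: list of (x, y, duration_ms) gaze fixations.
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max).
    """
    totals = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # assume non-overlapping AOIs
    return totals
```

Per-object dwell times like these would then feed the prediction algorithm chosen in design issue 2.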
Recommender systems usually seek to cater to the preferences of a single user. However, societal issues that involve multiple stakeholders, such as climate change, cannot be mitigated this way. We address this issue by going beyond traditional algorithms, using psychological theories to not only optimize what is recommended (i.e., the algorithm), but also how interface items are presented. We present the ‘Saving Aid’ recommender system for household energy conservation, encouraging users to adopt energy-saving measures with high kWh savings, such as buying environmentally-friendly electronic appliances. In an online user study (N = 258), we compare different interfaces that promote measures with high kWh savings using different framing techniques, presenting either a kWh savings score or a Smart Savings Score that combines effort and kWh savings. We show that framing positively affects the extent to which users consider kWh savings when choosing a measure, without compromising the user’s system evaluation.
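A score that combines effort and kWh savings could be sketched as a weighted sum. The weights, scales, and normalization below are illustrative assumptions, not the Smart Savings Score formula used in the study.

```python
def smart_savings_score(kwh_savings, effort, w_savings=0.7, w_effort=0.3,
                        max_kwh=500.0):
    """Illustrative combined score in [0, 100]: rewards kWh savings and
    penalizes effort. effort is on a 1 (low) to 5 (high) scale; the weights
    and cap are assumptions for this sketch."""
    savings_part = min(kwh_savings / max_kwh, 1.0)  # normalize savings to [0, 1]
    effort_part = (5 - effort) / 4.0                # low effort -> high score
    return round(100 * (w_savings * savings_part + w_effort * effort_part), 1)
```

Framing a measure with one combined number like this, rather than raw kWh alone, is the kind of presentation choice the study compares.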
An emerging challenge in course recommender systems is explaining to students why they have been recommended particular courses. In the context of a university, it can be valuable for a recommender system to introduce students to courses they may not have otherwise taken but which are still relevant to them. However, there is a tension between these goals: the less familiar students are with a course, the less able they are to judge its relevance. In this paper, we explore ways of familiarizing students with recommendations using three types of explanations designed with varying levels of personalization. We conduct a 67-student randomized controlled experiment using two course recommendation engines, content- and context-based, and augment recommendations with three different types of explanation. Students rated each recommended course in terms of novelty, unexpectedness, and successfulness (i.e., intent to enroll). We find several statistically significant results, including an increase in serendipity (i.e., unexpectedness + successfulness) when explaining new course recommendations using keywords from courses a student has previously taken.
The 2021 Adaptive and Personalized Persuasive Technology (ADAPPT 2021) workshop is the second edition of the ADAPPT series, which commenced in 2019. It is organized in conjunction with the 29th Association for Computing Machinery (ACM) Conference on User Modeling, Adaptation and Personalization (UMAP). We summarize the main ideas, topics and results of the 10 papers accepted for publication in the adjunct proceedings of the UMAP conference and for video/oral presentation at the virtual workshop.
Although the importance of tailoring persuasive technologies (PTs) has been discussed extensively in the literature, there is insufficient theory-driven guidance on how to employ social psychology theories in the design of PT interventions. In this paper, we provide an overview of the key frameworks in the extant literature for designing information systems, in general, and persuasive systems, in particular. Specifically, we identify their limitations, and propose a new framework called ”EMVE-DeCK Framework” based on the synthesis of the strengths of the existing frameworks. The EMVE-DeCK Framework, which is grounded in Bandura’s Triad of Reciprocal Determinism, comprises seven steps, which include: (1) Explain: Employ “Theory” to explain the target “Behavior” by uncovering the relationship between the “Behavioral Determinants” and the target “Behavior”; (2) Map: Map the significant “Behavioral Determinants” in the “Theory” domain to “Persuasive Strategies” in the “Technology” domain; (3) Validate: Validate the target users’ receptiveness to the “Persuasive Strategies” in the “Technology” domain; (4) Explicate: Employ “Theory” to explicate (explain) the adoption of the proposed persuasive “Technology” by uncovering the relationship between the user experience (UX) “Design Attributes” and the persuasive “Technology Adoption”; (5) Design: Design and implement theory-driven, tailored persuasive “Technology”; (6) Change: Deploy the persuasive “Technology” to change “Behavior” in the field; and (7) Knowledge: Contribute “Findings” to Knowledge. We discuss the framework in the context of PT interventions.
During the current COVID-19 pandemic, video-based behavior modeling has become a popular persuasive technique to motivate exercise performance in most fitness apps. However, there is limited research on the effectiveness of race-based tailoring. To bridge this gap, we investigated users' social-cognitive-belief profile when observing behavior models performing bodyweight exercises and the moderating effect of race. We based our study on 567 participants (50 Black and 517 White) recruited from Canada and the United States. We asked the participants to watch a video modeling a bodyweight exercise and answer questions on three social-cognitive beliefs: perceived self-efficacy, perceived self-regulation, and outcome expectation. The results of a three-way repeated-measures analysis of variance showed that there is an interaction between the race of the observer and the race of the behavior model. Among White observers, there is no significant difference between their overall social-cognitive belief when observing White models and when observing Black models. However, among Black observers, there is an effect of the race of the behavior model: they have a significantly higher overall social-cognitive belief when observing White models than when observing Black models. We discuss our findings in the context of tailoring behavior models to users of different races.
Designing and evaluating a recommendation algorithm are typically user-centric operations. However, users are not the sole party in real-world applications. Therefore, designing, deploying, and evaluating a personalized system should consider all parties (or stakeholders). Dealing with recommender systems as multistakeholder systems is a relatively new research direction. In particular, considering the requirements of multiple stakeholders when selecting the best algorithm among a set of alternatives has not been discussed extensively. An adaptive evaluation approach that can handle personalized needs from all parties is therefore required. This paper aims to fill this gap by introducing the use of goal modeling to support the selection of a recommendation algorithm. Through an illustrative example, we show the feasibility of modeling the recommendation alternatives and their contributions to multiple stakeholders' goals, so that the selected algorithm is well aligned with the overall system requirements. Accordingly, we argue that the goal modeling approach has the potential to help practitioners and researchers better reason about algorithm selection and, therefore, advances the development of recommender systems.
Most western countries are currently using more of the earth's resources than they produce, exhausting the earth and causing permanent damage. This study aims to determine how people can be nudged towards more ecological consumer behavior when using web supermarkets. An experimental website and a survey were created to explore consumers' willingness to try more ecological alternatives. The results confirm that digital nudges, more specifically the decoy effect and the middle-option bias, can guide participants towards more ecological meat alternatives, as compared to a website without nudges. We also investigated the influence of price tags by including scenarios without price information, with and without nudging. Without price information, our participants steered towards more sustainable (organic) alternatives even without nudges; the effect of the nudges was limited in these situations, and only the middle-option nudge managed to convince participants to take the even more ecological, vegetarian alternative. The combined results lead to the belief that the decision not to opt for organic meat is primarily motivated by price, whereas the decision to go vegetarian (arguably even more sustainable) is not motivated by price, but rather by the effect of the nudges.
As the Internet is becoming a popular source of food recipes, studies show that these recipes also contribute to health problems. A particular problem is that personalized food platforms promote recipes that contain large amounts of fat, sodium, and sugar. Filtering recipes on their health content would be possible, but this might lead to dissatisfaction among users, as unhealthy recipes tend to be popular. As a new contribution, we propose not to change the presented content but how it is presented: the decision context. We present a work-in-progress study that aims to steer users of a recipe search portal towards healthier choices through the serial-position effect, which predicts that users are biased towards higher-ranked items. We report a 2x2 within-subjects experiment, in which 17 participants searched for four different recipes and were asked to choose the recipe they liked the most from each list. Search results were presented either from high-to-low or from low-to-high sodium content, and were either personalized or not. Our analysis suggested that matching the content to the user's eating goals might overcome the effectiveness of serial-position nudges, while we found no average differences in user satisfaction levels across the two ranking scenarios.
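The serial-position manipulation amounts to re-ordering the result list by the health attribute, so that the items users are biased towards (the top positions) are the healthiest ones. A minimal sketch, where the field name `sodium_mg` is an assumption:

```python
def rank_by_sodium(recipes, healthy_first=True):
    # Order search results so that the lowest-sodium recipes occupy the top
    # serial positions (or the reverse, for the control ordering).
    return sorted(recipes, key=lambda r: r["sodium_mg"],
                  reverse=not healthy_first)
```

The two orderings produced by `healthy_first=True` and `healthy_first=False` correspond to the low-to-high and high-to-low sodium conditions of the experiment.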
Persuasive apps aim to contribute to the behavior change process of individuals. Some of these apps include personalization strategies that contribute to more effective changes in behavior. Gender-role differences are an aspect that can be further explored to improve app tailoring. Self-Determination Theory serves as a theoretical guide to design personalized persuasive apps that support gender-role differences. This position paper sparks a discussion around design ideas that might support these gender-role differences.
Persuasiveness of a Game to Promote the Adoption of COVID-19 Precautionary Measures and the Moderating Effect of Gender
Persuasive games are being widely implemented in the healthcare domain to drive behaviour change among individuals. Previous research has shown that there are differences in how males and females respond to persuasive attempts. However, there is little knowledge on whether gender moderates the effectiveness of persuasive games for health, specifically games for promoting the adoption of COVID-19 precautionary measures. To address this gap, we designed COVID Pacman-R, a persuasive game to promote the adoption of COVID-19 precautionary measures. This paper presents the design and evaluation of COVID Pacman-R to examine its overall perceived persuasiveness as well as gender differences in persuasiveness, to establish whether there is a need to tailor the game to various gender groups. Study results (N=131) revealed that the game is perceived as highly persuasive overall as well as by the different gender groups. The findings also revealed that there are no significant differences across gender groups in the game's ability to motivate players to adopt the precautionary measures.
A Health Belief Model Approach to Evaluating Maternal Health Behaviors among Africans - Design Implications for Personalized Persuasive Technologies
We investigate health beliefs and their impact on maternal health behaviors among African women, with a view to offering guidelines for designing personalized persuasive technologies to encourage appropriate maternal health and lifestyles amongst women in the Global South. The results from the analysis of our qualitative data uncover key themes from the interview responses (of 47 stakeholders) associated with our research questions. They are: monitoring of fetus development and ensuring the health and wellbeing of the expectant mother and unborn baby; inaccessibility of health facilities and long waiting times (queues) at the health facilities; poverty; perceptions about Caesarean Section (CS); ignorance of the consequences of inappropriate health behaviors; and low tech skills. Others include traditional and religious beliefs, constant health crises during pregnancies, delivery of newborn babies with serious health problems and deformities, and an increase in pregnancy- and childbirth-related deaths. Specifically, we uncovered that socio-cultural factors may or may not support appropriate maternal health behaviors. The socio-cultural factors include the ethnic practices, traditional beliefs, and religious inclinations of the pregnant mothers and their families. For instance, the belief that “safe delivery is only a function of belief in God”, the intake of untested concoctions, and stigmatization due to a mother's inability to deliver her baby by herself at home and/or her submission to Caesarean Section (CS) are unsupportive religious and traditional beliefs and practices that promote inappropriate maternal health behaviors amongst the people. Subsequently, the themes were categorized based on their associated health belief factors, such as perceived benefits, perceived barriers, perceived threat, and socio-cultural factors. We mapped them to their corresponding persuasive design strategies.
Finally, based on the outcome of our studies, we propose culturally appropriate guidelines for developing personalized persuasive technologies for maternal health, which will be potentially effective in motivating healthy behaviors and lifestyles amongst expectant and nursing mothers across African communities.
Gender and the Effectiveness of a Persuasive Game for Disease Awareness Targeted at the African Audience
Persuasive gamified systems have been effective at motivating behaviour change in various domains of life and for various user types. However, according to research, various user characteristics can impact the effectiveness of these systems. One of these user characteristics is user gender. In this paper, we examine the effectiveness of persuasive strategies implemented in a disease awareness persuasive game, titled “COVID Dodge”, for promoting COVID-19 awareness and safety precautions among the African audience and the moderating gender effects. In an in-the-wild study of 51 participants, we reveal that all the persuasive strategies implemented were highly effective overall and for each gender, with the Verifiability strategy being the most effective and the Reminders strategy as the least effective. We also uncovered that while all the persuasive strategies effectively motivated all genders, there was no significant difference in the effectiveness of any of the strategies across genders. We conclude with a discussion on implementing some persuasive strategies for African audiences.
Level of Involvement and the Influence of Persuasive Strategies in E-commerce: A Game-Based Approach
Research has shown that persuasive strategies are more effective in bringing about a change in attitude or behavior when they are tailored to individuals or groups of similar individuals. Several domains, such as the exercise and health domains, use the demographic data of users, such as their age, gender, and culture, to tailor influence strategies. However, in domains such as e-commerce, where users' demographic data is unknown, there is a need to identify other factors that can be used to tailor persuasive strategies. To contribute to research in this area, this work-in-progress paper investigates the use of shoppers' level of involvement in the shopping process as a potential factor for tailoring persuasive strategies in e-commerce. We present preliminary results from a game-based study that compares the responses to Cialdini's persuasive strategies of people with high and low levels of involvement. Our results suggest that people with a high level of involvement in the shopping process are influenced differently from those with a low level of involvement, making level of involvement a potential trait for tailoring persuasive strategies in e-commerce. Shoppers who were highly involved in the shopping process responded to more authority messages than to other strategies, while those with a low level of involvement responded to more commitment messages. Also, the highly involved shoppers shopped for healthier foods for themselves and a child, while they shopped least healthily for a significant other; the low-involvement shoppers shopped healthier for their significant other and less healthy for themselves.
Insurance, algorithmic decision-making, and discrimination (INSURANCE 2021)
Insurance companies could use algorithmic systems to set premiums for individual consumers or deny them insurance. More and more data become available for insurers for risk differentiation. For example, some insurers monitor people's driving behaviour to estimate risks. To some extent, risk differentiation is necessary for insurance. And it could be considered fair when, e.g., high-risk drivers pay more.
But there are drawbacks. Algorithmic decision-making could lead, unintentionally, to discrimination on the basis of, for instance, ethnicity or gender. Too much personalized risk differentiation could also make insurance unaffordable for some people. Furthermore, risk differentiation might result in the poor paying more, thereby worsening economic inequality.
We address these topics with a three-part workshop:
-Part 1: Panel (90min)
-Part 2: Break-out groups (60min)
-Part 3: Presentations (60min)
The Third International Workshop on Adaptive and Personalized Privacy and Security (APPS 2021) aims to bring together researchers and practitioners working on diverse topics related to understanding and improving the usability of privacy and security software and systems, by applying user modeling, adaptation and personalization principles. Our special focus in 2021 is on challenges and opportunities related to the Covid-19 outbreak, more specifically on ensuring security and privacy of sensitive data and secure user interactions in online systems. The third edition of the workshop includes interdisciplinary contributions from Belgium, Cyprus, Germany, Greece, Portugal, the Netherlands, and the United Kingdom, that introduce new and disruptive ideas, suggest novel solutions, and present research results about various aspects (theory, applications, tools) of bringing user modeling, adaptation and personalization principles into privacy and systems security. This summary gives a brief overview of APPS 2021, held online in conjunction with the 29th ACM Conference on User Modeling, Adaptation and Personalization (ACM UMAP 2021).
Contemporary legislative and scientific trends stress the importance of control as an instrument to manage informational privacy. Still, privacy decision-making remains far from optimal as there is a discrepancy between privacy attitudes and privacy behaviours. To interpret this gap, this study builds upon the systematic biases found in behavioural economics theory, more specifically the illusion of control. This study examines the effects of the illusion of control through stimulus familiarity on privacy behaviour. More specifically, we compared the participants’ willingness to provide personal data between a very familiar web store and a web store unknown to them. The results from a sample of 171 students in the Netherlands indicate that, even though the level of perceived control and the amount of data disclosure are higher in the familiar condition, stimulus familiarity does not induce an illusion of control in privacy trade-offs. Moreover, this relationship is slightly weaker for sensitive disclosure. However, this study did find evidence of gender differences in sensitive disclosure: in this sample, women disclosed significantly less sensitive information than men, possibly due to risk-aversion.
A Phishing Mitigation Solution Using Human Behaviour and Emotions that Influence the Success of Phishing Attacks
Phishing is a social engineering scam that can cause financial and reputational damage to people and organisations. Studies have demonstrated the effects of human behaviour and emotions on people's security behaviour, such as falling into a phishing scam. Moreover, several studies show the effects of the COVID-19 outbreak on human emotions, impacting phishing attempts' success. In this study, we have developed a solution using previous studies' results to identify vulnerable users (i.e., those at risk of clicking on phishing links) in organisations. The solution assigns proper mitigation actions to those high-risk users. The system contains behaviour measurement, risk scoring, and mitigation modules that can mature and develop accuracy over time. Furthermore, situations similar to a pandemic are considered in the solution. The proposed solution will help organisations focus more on protecting high-risk users and reducing successful phishing attacks. This solution should be used in combination with technical anti-phishing and cybersecurity awareness training campaigns to achieve better results.
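The modular design described above (behaviour measurement, risk scoring, mitigation) could be sketched as follows. The features, weights, and thresholds are illustrative assumptions, not the paper's calibrated model:

```python
def phishing_risk_score(user):
    """Illustrative weighted risk score in [0, 1].

    user: dict of normalized behavioural features; the feature names and
    weights below are hypothetical, chosen only to show the scoring shape.
    """
    weights = {
        "past_click_rate": 0.4,   # fraction of simulated phish clicked
        "training_overdue": 0.3,  # 1 if awareness training has lapsed
        "negative_emotion": 0.3,  # normalized stress/anxiety indicator
    }
    return round(sum(w * user.get(k, 0.0) for k, w in weights.items()), 3)

def mitigation_action(score, high=0.6, medium=0.3):
    # Map the score to a mitigation tier (thresholds are illustrative).
    if score >= high:
        return "targeted training + stricter mail filtering"
    if score >= medium:
        return "periodic reminders"
    return "standard awareness campaign"
```

In the proposed system, the weights and thresholds would mature over time as the behaviour measurement module accumulates data.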
With data breaches on the rise, especially since the Covid-19 pandemic, a huge challenge is to design secure platforms for sensitive data sharing and to support vital decisions for both healthcare provision and enhanced personalised patient care. A patient-centric tool chain to integrate cross-border medical records has recently been proposed. The aim is to demonstrate how emerging technologies for authentication, authorisation, and big data storage can converge in a healthcare platform to enable citizens (and researchers) to securely retrieve vital patient health information while remaining aligned with data protection regulations and standards. We develop an initial risk model with four common threat scenarios, discussing risk factors such as threat, vulnerability, impact, and likelihood. We detail how the healthcare platform design can mitigate the underlying vulnerabilities with countermeasures that do not compromise the transparency of the data sharing process or users' trust.
A Comparative Study among Different Computer Vision Algorithms for Assisting Users in Picture Password Composition
Picture gesture authentication (PGA), utilized by millions of users worldwide, is a cued-recall graphical authentication system which requires users to select an image and subsequently draw gestures on that image to create their picture password. A crucial component for enhancing the security of PGA-like schemes is the accurate quantification of the user-chosen passwords through a picture password strength meter. Despite the huge adoption of PGA worldwide, there is rather limited knowledge on the implementation aspects of an accurate picture password strength meter that would assist users in creating secure picture passwords. In this paper, we present the implementation and evaluation of an assistive picture password strength meter system within PGA-like schemes, which is based on image analysis through computer vision techniques. Results of the evaluation study (n=34) revealed that different computer vision approaches perform differently across the various datasets used during training. These findings could drive the design of intelligent security mechanisms for quantifying the strength of user-chosen passwords, and ultimately assist end-users in making better picture password selections by providing feedback about the strength of their passwords, as well as assist service providers with the integration of assistive security mechanisms.
The COVID pandemic made it challenging for usable security and privacy researchers around the globe to run experiments involving human subjects, specifically in cases where such experiments are conducted in a controlled lab setting. Examples include, but are not limited to, (a) observing and collecting data on user behavior with the goal of (b) informing the design and (c) engineering of novel concepts based on adaptation and personalization, as well as (d) evaluating such concepts regarding user performance and robustness against different threat models. In this keynote I will start by providing a brief introduction to, and examples of, our research on behavioral biometrics. I will then discuss how the current situation influences research requiring close work with human subjects in lab settings and outline approaches to address emerging issues. Finally, I will provide some examples of out-of-the-lab research and reflect on both the challenges and opportunities of these approaches.
Attribute-based encryption (ABE) schemes and their variations are often applied to preserve the privacy of data. In particular, ABE scheme proposals are resilient to multiple attacks, including interception, interruption, modification, fabrication, unauthorized authentication, and unauthorized access of data. Existing proposals have several limitations; for example, the generation, verification, and distribution of digital certificates incur extra computation and communication overhead that is not suitable for resource-constrained computing. Furthermore, in most ABE schemes, a certification authority (CA) generates the public/secret keys according to a set of attributes. However, the compromise of the CA can endanger the secret keys and, therefore, the secrecy of encrypted messages. Some of the existing ABE schemes are based on bilinear pairing, which requires large security parameters; this makes such ABE schemes unsuitable for resource-constrained computing devices.
Current ABE proposals [1, 2, 3, 4] are complex because they require large security parameters (i.e., 2048-bit or 4096-bit keys) to achieve 2^128 security. Besides that, those ABE schemes assume a CA with an active role in the application process: the CA generates and distributes secret keys to devices or users. Nonetheless, sharing private attributes with the CA can risk data and user privacy, since the CA can also decrypt messages, depending on the application scenario, and retrieve the data. Moreover, the compromise of the CA poses a risk to the secrecy of communication between sender and receiver. In addition, some studies propose symmetric key schemes for resource-constrained devices. However, in large-scale networked systems, symmetric key management becomes very complex and inefficient, and symmetric-key deployment often requires a separate protocol for session key agreement and generation. In IoT networks, where mostly short-sized data is exchanged, symmetric key encryption schemes are also subject to ciphertext-only attacks.
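To make the key-management burden concrete, the number of pairwise symmetric keys grows quadratically with network size, while an asymmetric scheme needs only one key pair per device. A minimal illustration (the figures are our own, not taken from any cited proposal):

```python
from math import comb

def pairwise_symmetric_keys(n: int) -> int:
    # Every pair of devices must share its own secret key: n*(n-1)/2 keys.
    return comb(n, 2)

def asymmetric_key_pairs(n: int) -> int:
    # A public-key scheme needs only one key pair per device.
    return n

# A modest IoT deployment of 1,000 devices already requires ~500,000
# symmetric keys, versus only 1,000 asymmetric key pairs.
print(pairwise_symmetric_keys(1000), asymmetric_key_pairs(1000))
```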
In this paper, we discuss how to build efficient ABE schemes based on elliptic curve cryptography (ECC) without the use of a CA. ECC-based ABE schemes require smaller security parameters (i.e., 256 or 512 bits) to achieve at least 2^128 computational security, which makes them efficient on resource-constrained computing devices. In particular, efficient ABE schemes, such as , are based on the computational Diffie-Hellman assumption and its derivatives. Under such an assumption, we can perform elliptic curve group operations, point addition and scalar multiplication, without compromising the security of the ABE scheme. In other words, an adversary or oracle cannot efficiently “guess” or “find” the secret asymmetric key.
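As an illustration of the group operations involved, the following sketch implements point addition and double-and-add scalar multiplication on a tiny textbook curve (y^2 = x^3 + 2x + 2 over F_17); the curve, generator, and key values are illustrative only, and real deployments use standardized ~256-bit curves to reach 2^128 security:

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 (illustration only).
P, A = 17, 2
O = None  # point at infinity (group identity)

def ec_add(p1, p2):
    # Standard affine addition law for short Weierstrass curves.
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O  # p2 is the inverse of p1
    if p1 == p2:  # point doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:         # point addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mul(k, point):
    # Double-and-add: computes k*point in O(log k) group operations.
    result = O
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (5, 1)               # generator of a subgroup of order 19
sk = 13                  # secret scalar (the "secret key")
pk = scalar_mul(sk, G)   # public key: pk = sk * G
```

Recovering `sk` from `pk = sk·G` is exactly the elliptic curve discrete logarithm problem referenced above; on this 17-element toy field it is trivially brute-forced, which is why practical parameters are 256 bits or more.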
ABE schemes are usually accompanied by assumptions that are necessary for the deployed systems. An ABE scheme consists of key pair generation, encryption, and decryption algorithms; a few proposals additionally include key pair update and key pair revocation algorithms. The key pair generation algorithm considers the secret attributes of a device or user: it takes as input a security parameter λ and a set of secret attributes AS. λ is conventionally given as a string of λ ones and determines the chosen finite field, the access structure, and the length of the secret keys and messages. The algorithm outputs the public/secret key pair (PK, SK), which is distributed, either offline or online, to the involved entities, i.e., devices or users.
The encryption algorithm takes as input the public key PK of a device, the access policy P, and a message M; it outputs a ciphertext CT that can be exchanged over a hostile environment. Conversely, the decryption algorithm takes as input the secret key SK and the ciphertext CT and outputs the plaintext message M. The key pair generation algorithm thus sets the mathematical foundations that connect the key pair, whereas the encryption and decryption algorithms conceal and reveal the actual data during transmission and exposure.
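The three algorithms can be sketched as the following interface. This is a hypothetical mock that only mirrors the inputs and outputs described in the text: the hash-based construction is a stand-in with no real security, whereas an actual ABE scheme realizes key generation, encryption, and decryption through elliptic curve or pairing group operations:

```python
import hashlib
import os
from typing import NamedTuple

class KeyPair(NamedTuple):
    PK: bytes
    SK: bytes
    attributes: frozenset  # the secret attribute set A_S bound to the key

def keygen(lam: int, attributes: set) -> KeyPair:
    # Input: security parameter lambda and attribute set A_S.
    # Output: key pair (PK, SK). Hashing stands in for group operations.
    SK = os.urandom(lam // 8)
    PK = hashlib.sha256(b"pub|" + SK).digest()
    return KeyPair(PK, SK, frozenset(attributes))

def _keystream(PK: bytes, policy: frozenset, n: int) -> bytes:
    # Derive n bytes bound to both the public key and the access policy P.
    material = PK + b"|".join(sorted(a.encode() for a in policy))
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(material + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(PK: bytes, policy: set, M: bytes) -> bytes:
    # Input: public key PK, access policy P, message M. Output: ciphertext CT.
    p = frozenset(policy)
    return bytes(a ^ b for a, b in zip(M, _keystream(PK, p, len(M))))

def decrypt(keypair: KeyPair, policy: set, CT: bytes) -> bytes:
    # Input: secret key SK and ciphertext CT. Succeeds only if the
    # key's attribute set satisfies the access policy.
    p = frozenset(policy)
    if not p <= keypair.attributes:
        raise PermissionError("attribute set does not satisfy the access policy")
    PK = hashlib.sha256(b"pub|" + keypair.SK).digest()
    return bytes(a ^ b for a, b in zip(CT, _keystream(PK, p, len(CT))))
```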
If the secret or shared attributes of an entity change, for any reason, then all the keys should be updated. The key pair update algorithm regenerates the public/secret key pair (PK, SK): it takes the updated secret attribute set AS as input and outputs a new key pair (PK′, SK′). In the key pair revocation algorithm, keys are revoked either at the request of an entity or due to the malicious behaviour of some users/devices. Three cases for key revocation can be identified:
(1) Legitimate revocation: the key pair is revoked due to a system update, an expiration date, or scheduled maintenance of the networked system. (2) Malicious activity: key revocation takes place due to malicious behaviour observed and/or reported by an entity of the networked system. (3) Attribute update: a change in the attribute set triggers a new key revocation procedure.
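The three revocation triggers can be captured in a small dispatch sketch (the type and handler names are our own, not part of any standard ABE API):

```python
from enum import Enum, auto

class RevocationCause(Enum):
    LEGITIMATE = auto()          # system update, key expiry, scheduled maintenance
    MALICIOUS_ACTIVITY = auto()  # misbehaviour observed or reported by an entity
    ATTRIBUTE_UPDATE = auto()    # the entity's attribute set A_S has changed

def handle_revocation(cause: RevocationCause) -> str:
    # An attribute update additionally triggers the key pair update
    # algorithm, regenerating (PK', SK') from the new attribute set.
    if cause is RevocationCause.ATTRIBUTE_UPDATE:
        return "revoke (PK, SK); rerun key pair update to obtain (PK', SK')"
    if cause is RevocationCause.MALICIOUS_ACTIVITY:
        return "revoke (PK, SK); flag entity for audit"
    return "revoke (PK, SK) on schedule"
```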
In the literature, ABE algorithms are theoretically assessed by proving that their mathematical foundations hold in a malicious environment. Likewise, they are assessed by their computational and memory complexity as well as by practical implementations on real devices or in simulated networks. In principle, the security of an ABE scheme is carefully analysed to ensure resistance against the usual attacks: (i) computing the secret key SK from the public key PK; (ii) computing the secret key SK from multiple ciphertexts (i.e., a chosen-ciphertext attack); (iii) showing by reduction that the computational problem underlying the ABE scheme is at least as hard as the discrete logarithm problem (DLP); and (iv) proving the scheme secure against an adversary A who knows the shared attribute set AK and attempts to derive the secret key SK through a collusion attack.
The security and privacy challenges posed by resource-constrained systems are compounded by the heterogeneous nature of devices with varying degrees of computation and storage capacity. It is therefore essential to find lightweight solutions that eliminate the need for applying a different security scheme per system. Existing public-key encryption and attribute-based encryption schemes are often computationally expensive and therefore not suitable for resource-constrained devices. Moreover, sharing attributes with a certification authority risks the privacy of devices if the CA has been compromised; new ABE schemes should not endanger the secrecy of messages among devices even if the CA is compromised.
It is also essential for researchers to propose schemes based on mathematical constructions that are proven to be secure and lightweight, such as elliptic curve cryptography, which supports smaller key sizes and is highly suitable for resource-constrained devices.
Biometric technologies are lately being considered for student identity management in Higher Education Institutions, as they provide several advantages over traditional knowledge-based and token-based authentication methods, e.g., high security entropy, convenience, and a sense of technological modernity for end-users. While biometric technologies have many benefits from both a security and a usability point of view, there is still a need for innovative user identity management solutions that continuously identify and authenticate students during academic and teaching activities. In addition, biometrics entail several threats and weaknesses with regard to the privacy of the data stored about the user, which negatively affect user acceptance and the wider adoption of biometrics due to regulatory and legal issues. In this paper, we refer to our ongoing research on intelligent and continuous online student identity management for improving security and trust in European Higher Education Institutions. We further highlight, based on the literature, existing challenges, threats, and state-of-the-art approaches with regard to preserving the privacy of biometric-driven data.
The Personalized Intelligent Conversational Agents workshop focuses on long-term engaging spoken dialogue systems and text-based chatbots, as well as conversational recommender systems. The goal of the workshop is to stimulate discussion around the problems, challenges, possible solutions, and research directions regarding the exploitation of natural language processing and machine learning techniques to learn user features and use them to personalize the dialogue in the next generation of intelligent conversational agents.
Chatbots in the tourism industry: the effects of communication style and brand familiarity on social presence and brand attitude
Text-based chatbots are increasingly being implemented in the tourism sector to supplement online customer service encounters. However, customers often perceive conversations with chatbots as unnatural and impersonal. Therefore, we investigated whether a humanlike communication style enhances users’ chatbot and brand perceptions. Two experiments were conducted in which the effects of informal language (vs. formal language) and invitational rhetoric (present vs. absent) were examined separately. In both experiments, participants engaged in conversations with a customer service chatbot in the tourism sector, after which they evaluated social presence and attitude towards the brand. Brand familiarity was also included as a factor in both experiments, as users’ brand familiarity affects their perceptions of communication style in human-to-human interaction. The results showed that chatbots using informal language or invitational rhetoric increase users’ brand attitude via social presence. Moreover, brand familiarity only moderated the findings when the chatbot used invitational rhetoric: participants who were familiar with the brand experienced more social presence when the chatbot messages contained invitational rhetoric. We conclude that the perceived humanness of chatbots can be increased by adopting a communication style consisting of informal language and invitational rhetoric. Implications for the design and evaluation of chatbot messages are discussed.
Interacting with chatbots has become ubiquitous nowadays. Nevertheless, conversational agents often remain unable to reliably succeed in social contexts, which negatively influences users’ experience and prevents them from exploiting the technology’s full potential. In efforts to improve user experience and subsequent trust formation in chatbots, only very little attention has been paid to the active involvement of the user and, with that, to customization options. Employing a preregistered experimental 1x2 between-subjects study design (N = 171), this study explores an alternative to the typical one-chatbot-fits-all solution and investigates the potential of active user-based chatbot customization for the development of trust in chatbots. While customization had no direct effect on trust, anthropomorphism was identified as a significant mediator. The chatbot’s interpersonal communicative competence was not affected by customization, yet it did predict trust. Exploratory analyses of participants’ feedback point towards the importance of individual differences between users and generally show a positive impact of customization on the overall chatbot experience.
Not Directly Stated, Not Explicitly Stored: Conversational Agents and the Privacy Threat of Implicit Information
As conversational agents continue to evolve, it will become increasingly common to interact with search engines and recommender systems via natural language dialogue. Such interactions guide and shape our decision making, especially our consumption of products and services. The evolution of conversational agents will bring new challenges in protecting the privacy of users and research has already begun to identify and address potential threats. Current research, however, focuses on how conversational agents acquire and process explicit information. In this paper, we consider the future and bring to light the up-and-coming privacy risks posed by implicit information. Our first point is that meaning that is expressed implicitly is an integral part of natural language, implying that agents that have the ability to engage in a fully humanlike dialogue will also have the ability to manipulate implied meaning. As a result, such agents will be capable of acquiring sensitive information about users that is not directly stated. Users have little awareness of or control over information that is implicitly communicated. Our second point is that in today's search and recommender systems user profiles are not explicitly stored. As a result, it is not obvious that a user is being targeted on the basis of implicit person-specific information. The way forward, we argue, is for research in the area of conversational agents to devote more attention to the linguistic principles that underlie implied meaning and the legal means that are available to protect users.
Diet coaching is a behaviour change task that requires a great deal of interaction with patients. E-health apps have gathered considerable research interest and, recently, chatbots have been leveraged to address this task, with a focus on persuasion to motivate people towards behaviour change. In this paper, we examine current approaches to building persuasive dieting chatbots and expose a number of major unsolved challenges. We motivate them with evidence from previous work and show that current chatbots do not handle certain scenarios properly, limiting their communication and persuasion capabilities.
Measures of algorithmic fairness often do not account for human perceptions of fairness that can substantially vary between different sociodemographics and stakeholders. The FairCeptron framework is an approach for studying perceptions of fairness in algorithmic decision making such as in ranking or classification. It supports (i) studying human perceptions of fairness and (ii) comparing these human perceptions with measures of algorithmic fairness. The framework includes fairness scenario generation, fairness perception elicitation and fairness perception analysis. We demonstrate the FairCeptron framework by applying it to a hypothetical university admission context where we collect human perceptions of fairness in the presence of minorities. An implementation of the FairCeptron framework is openly available, and it can easily be adapted to study perceptions of algorithmic fairness in other application contexts. We hope our work paves the way towards elevating the role of studies of human fairness perceptions in the process of designing algorithmic decision making systems.
Diversity-aware Recommendations for Social Justice? Exploring User Diversity and Fairness in Recommender Systems
Diversity and fairness are increasingly linked in the field of personalized recommendations. For instance, the diversification of items (”item diversity”) is considered key to fairness. Less attention has been paid to ”user diversity” and its implications for fairness. In this paper, I problematize the conceptualization and application of user diversity in recommender systems. I argue that the widespread understanding of user diversity as natural, value-neutral, individual-level categories may accidentally compound historical injustice. To mitigate emerging biases, diversity dimensions need to be contextualized by mapping structural inequalities between users. The paper thus stresses the importance of paying attention to the structural context of diversity, where context refers to the political and social circumstances surrounding the user’s life. The paper makes three contributions: 1) it connects fairness to the diversity literature in the field of recommender systems, 2) it specifies the tension between item-side and user-side fairness by revealing a bias in the treatment of user diversity, and 3) it proposes solutions to mitigate the bias by drawing on Black feminist and critical race theory.
Towards Continuous Automatic Audits of Social Media Adaptive Behavior and its Role in Misinformation Spreading
In this paper, we argue for continuous and automatic auditing of social media adaptive behavior and outline its key characteristics and challenges. We are motivated by the spread of online misinformation, which has recently been fueled by opaque recommendations on social media platforms. Although many platforms have declared that they are taking steps against the spread of misinformation, the effectiveness of such measures must be assessed independently. To this end, independent organizations and researchers carry out audits to quantitatively assess platform recommendation behavior and its effects (e.g., filter bubble creation tendencies). The audits are typically based on agents simulating user behavior and collecting platform reactions (e.g., recommended items). The downside of such auditing is the cost of interpreting the collected data (here, some auditors are advancing automatic annotation). Furthermore, social media platforms are dynamic and ever-changing: algorithms change, concepts drift, new content appears. Therefore, audits need to be performed continuously, which further increases the need for automated data annotation. Regarding the data annotation, we argue for the application of weak supervision, semi-supervised learning, and human-in-the-loop techniques.
We are living in an era of global digital platforms, ecosystems of algorithmic processes that serve users worldwide. However, the increasing exposure to diversity online – of information and users – has led to important considerations of bias. A given platform, such as the Google search engine, may demonstrate behaviors that deviate from what users expect, or what they consider fair, relative to their own context and experiences. In this exploratory work, we put forward the notion of transparency paths, a process by which we document our positions, choices, and perceptions when developing and/or using algorithmic platforms. We conducted a self-reflection exercise with seven researchers, who collected and analyzed two sets of images: one depicting an everyday activity, “washing hands,” and a second depicting the concept of “home.” Participants had to document their process and choices, and in the end, compare their work to that of others. Finally, participants were asked to reflect on the definitions of bias and diversity. The exercise revealed the range of perspectives and approaches taken, underscoring the need for future work to refine the transparency paths methodology.
Over the past years, there has been increasing concern regarding the risk of bias and discrimination in algorithmic systems, which has received significant attention within the research community. To ensure a system’s fairness, various methods and techniques have been developed to assess and mitigate potential biases. Such methods, also known as “Formal Fairness”, look at various aspects of the system’s advanced reasoning mechanisms and outcomes, with techniques ranging from local explanations (at the feature level) to visual explanations (saliency maps). Another, equally important, aspect is the users’ perception of the system’s fairness. Even if a decision system is provably “Fair”, if users find it difficult to understand how the decisions were made, they will refrain from trusting, accepting, and ultimately using the system altogether. This has raised the issue of “Perceived Fairness”, which looks at means to reassure users of a system’s trustworthiness. In that sense, providing users with some form of explanation of why and how certain outcomes resulted is highly relevant, especially nowadays as reasoning mechanisms increase in complexity and computational power. Recent studies suggest a plethora of explanation types. The current work reviews recent progress in explaining systems’ reasoning and outcomes, and categorizes and presents it as a reference on state-of-the-art fairness-related explanations.