
Volume 5, Issue 4


Regular Articles

  • Adaptive Real-Time Emotion Recognition from Body Movements

    By Weiyi Wang, Valentin Enescu, and Hichem Sahli

    We propose a real-time system that continuously recognizes emotions from body movements. Low-level 3D postural features are combined with high-level kinematic and geometrical features and fed to a Random Forests classifier, either through summarization (statistical values) or through aggregation (bag of features). To improve the generalization capability and robustness of the system, a novel semi-supervised adaptive algorithm is built on top of the conventional Random Forests classifier. The MoCap UCLIC affective gesture database (labeled with four emotions) was used to train the Random Forests classifier, which achieved an overall recognition rate of 78% under 10-fold cross-validation. The trained classifier was then used in a stream-based semi-supervised Adaptive Random Forests method for the continuous classification of unlabeled Kinect data. The very low update cost of our adaptive classifier makes it highly suitable for data stream applications. Tests on publicly available emotion datasets (body gestures and facial expressions) indicate that our classifier outperforms existing data stream algorithms in both accuracy and computational cost.
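
    A minimal sketch of the summarization strategy described above, assuming scikit-learn and synthetic per-frame features as stand-ins for the paper's actual postural and kinematic descriptors (the adaptive, stream-based update is not shown):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def summarize(seq):
            # seq: (n_frames, n_features) per-frame features for one movement clip.
            # Collapse the time axis into fixed-length statistics; this is one
            # common reading of "summarization (statistical values)".
            return np.concatenate([seq.mean(0), seq.std(0), seq.min(0), seq.max(0)])

        rng = np.random.default_rng(0)
        # Synthetic stand-ins: 40 clips of varying length, 12 features per frame,
        # four emotion labels (as in the UCLIC data).
        clips = [rng.normal(size=(int(rng.integers(50, 200)), 12)) for _ in range(40)]
        labels = rng.integers(0, 4, size=40)

        X = np.stack([summarize(c) for c in clips])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
        print(clf.predict(X[:5]))  # predicted emotion classes for the first five clips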

  • The MovieLens Datasets: History and Context

    By F. M. Harper and Joseph A. Konstan

    The MovieLens datasets are widely used in education, research, and industry. They are downloaded hundreds of thousands of times each year, reflecting their use in popular press programming books, traditional and online courses, and software. These datasets are a product of member activity in the MovieLens movie recommendation system, an active research platform that has hosted many experiments since its launch in 1997. This paper documents the history of MovieLens and the MovieLens datasets. We include a discussion of lessons learned from running a long-standing, live research platform from the perspective of a research organization. We document best practices and limitations of using the MovieLens datasets in new research.
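
    For readers new to the datasets, a minimal sketch of loading a recent release with pandas; the file names and columns below follow the ml-latest-small layout distributed at https://grouplens.org/datasets/movielens/:

        import pandas as pd

        # ratings.csv: userId, movieId, rating, timestamp
        # movies.csv:  movieId, title, genres
        ratings = pd.read_csv("ratings.csv")
        movies = pd.read_csv("movies.csv")

        # A typical first look: the most-rated movies and their mean ratings.
        stats = (ratings.groupby("movieId")["rating"]
                        .agg(["count", "mean"])
                        .join(movies.set_index("movieId")["title"]))
        print(stats.sort_values("count", ascending=False).head(10))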

  • A Process for Systematic Development of Symbolic Models for Activity Recognition

    By Kristina Yordanova and Thomas Kirste

    Several emerging approaches to activity recognition combine a symbolic representation of user actions with probabilistic elements for reasoning under uncertainty. These approaches show promising results in terms of recognition performance, coping with the uncertainty of observations, and avoiding model size explosion when complex problems are modelled. Experience has shown, however, that it is not always intuitive to model even seemingly simple problems, and to date there are no guidelines for developing such models. To address this problem, we present a development process for building symbolic models that is based on the experience acquired so far, as well as on existing engineering and data analysis workflows. The proposed process is a first attempt at providing structured guidelines and practices for designing, modelling, and evaluating human behaviour in the form of symbolic models for activity recognition. As an illustration of the process, a simple example from the office domain was developed. The process was then evaluated in a comparative study against an intuitive approach. The results showed a significant improvement over the intuitive process, and the study participants reported greater ease of use and perceived effectiveness when following the proposed process. To evaluate its applicability to more complex activity recognition problems, the process was also applied to a problem from the kitchen domain, where it yielded an average accuracy of 78%. The resulting model outperformed state-of-the-art methods applied to the same dataset in previous work and performed comparably to a symbolic model developed by a model expert without following the proposed development process.
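
    A minimal sketch of what a symbolic action model can look like, using hypothetical office-domain actions with preconditions and effects; the paper's actual modelling language and its probabilistic inference layer are not reproduced here:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Action:
            name: str
            preconditions: frozenset  # facts that must hold for the action to execute
            add_effects: frozenset    # facts the action makes true
            del_effects: frozenset    # facts the action makes false

        def applicable(state, action):
            return action.preconditions <= state

        def execute(state, action):
            return (state - action.del_effects) | action.add_effects

        # Hypothetical office-domain actions in the spirit of the paper's example.
        take_cup = Action("take_cup", frozenset({"at_desk"}),
                          frozenset({"holding_cup"}), frozenset())
        go_print = Action("go_to_printer", frozenset({"at_desk"}),
                          frozenset({"at_printer"}), frozenset({"at_desk"}))

        state = frozenset({"at_desk"})
        for action in (take_cup, go_print):
            if applicable(state, action):
                state = execute(state, action)
                print("executed:", action.name, "->", sorted(state))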


Special issue on New Directions in Eye Gaze for Interactive Intelligent Systems (Part 1 of 2)

(Special issue editors: Yukiko Nakano, Roman Bednarik, Hung-Hsuan Huang, and Kristiina Jokinen)

  • “I’ll Be There Next”: A Multiplex Care Robot System That Conveys Service Order Using Gaze Gestures

    By Keiichi Yamazaki, Akiko Yamazaki, Keiko Ikeda, Chen Liu, Mihoko Fukushima, Yoshinori Kobayashi, and Yoshinori Kuno

    In an ethnographic study at an elderly care center, we observed two different functions of human gaze being used to signal the order of service (i.e., who gets served first and who is next). In one case, when an elderly person requested assistance, the care worker's gaze signaled that he/she would serve that person next. In the other case, the gaze conveyed a request for the service seeker to wait while the care worker finished attending to the current elderly person. Each function of the gaze depended on both the current engagement and the other behaviors of the care worker. We sought to integrate these findings into the development of a robot that might function more effectively in human-robot interaction settings in which multiple parties are involved. We focused on these multiple functions of gaze and bodily actions and implemented them in our robot. We conducted three experiments, whose findings reveal multiple effects of the robot's gaze and gestures in combination. We conclude that the utilization of gaze can play an important role in the design and development of robots that interact effectively in multiparty human-robot interaction settings.

  • Generating Robot Gaze on the Basis of Participation Roles and Dominance Estimation in Multiparty Interaction

    By Yukiko I. Nakano, Takashi Yoshino, Misato Yatsushiro, and Yutaka Takase

    Gaze is an important nonverbal feedback signal in multiparty face-to-face conversations. It is well known that gaze behaviors differ depending on participation role: speaker, addressee, or side participant. In this study, we focus on dominance as another factor that affects gaze. First, we conducted and analyzed an empirical study showing how gaze behaviors are affected by both dominance and participation roles. Then, using the speech and gaze features from that study that were statistically significant for distinguishing the more dominant from the less dominant person, we established a regression-based model for estimating conversational dominance. On the basis of this model, we implemented a dominance estimation mechanism that processes online speech and head direction data. We then applied our findings to human-robot interaction. To design robot gaze behaviors, we analyzed gaze transitions with respect to participation roles and dominance and implemented gaze transition models as robot gaze behavior generation rules. Finally, we evaluated a humanoid robot that has dominance estimation functionality and determines its gaze based on the gaze models, and found that dominant participants had a better impression of less dominant robot gaze behaviors. This suggests that a robot using our gaze models was preferred to a robot that simply looked at the speaker. We have demonstrated the importance of considering dominance in human-robot multiparty interaction.
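
    A minimal sketch of the regression-based estimation idea, using logistic regression over synthetic speech and gaze features; the feature names and data below are hypothetical stand-ins, not the paper's actual features or model:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Hypothetical per-participant features: speaking-time ratio, normalized
        # number of turns taken, and ratio of time being looked at by others.
        X = rng.uniform(size=(60, 3))
        # Synthetic labels for illustration: 1 = more dominant, 0 = less dominant.
        y = (X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.05, size=60) > 0.5).astype(int)

        model = LogisticRegression().fit(X, y)
        # Estimated probability that each of the first three participants is dominant.
        print(model.predict_proba(X[:3])[:, 1])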