The paper titled “Affect State Recognition for Adaptive Human Robot Interaction in Learning Environments” (D. Antonaras, C. Pavlidis, N. Vretos, P. Daras) has been accepted at the 12th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), which took place in Bratislava, Slovakia from 9 to 10 July 2017.
The paper titled “Modelling Learning Experiences in adaptive multi-agent learning environments” (D. Tsatsou, N. Vretos, P. Daras) has been accepted for inclusion in the Proceedings of the 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), which took place in Athens, Greece from 6 to 8 September 2017.
The paper titled “Adaptive learning based on affect sensing” (D. Tsatsou, A. Pomazanskyi, E. Hortal, E. Spyrou, H. Leligou, S. Asteriadis, N. Vretos, P. Daras) has been accepted at the International Conference on Artificial Intelligence in Education, which will take place in London, UK from 27 to 30 August 2018.
The paper titled “High-performance and Lightweight Real-time Deep Face Emotion Recognition” (co-authored by Justus Schwan, Esam Ghaleb, Enrique Hortal and Stylianos Asteriadis) has been accepted for publication at SMAP 2017, the 12th International Workshop on Semantic and Social Media Adaptation and Personalization. The paper will be included in the Special Session on Multimodal affective analysis for human-machine interfaces and learning environments.
Co-authored by C. Athanasiadis, C.Z. Lens, D. Koutsoukos, E. Hortal and S. Asteriadis, the paper entitled "Personalized, affect and performance-driven Computer-based Learning" has been accepted for the 9th International Conference on Computer Supported Education (CSEDU 2017), which will take place in Porto (Portugal) from the 21st to the 23rd of April, 2017.
Emotion recognition plays an important role in several applications, such as human-computer interaction and understanding the affective state of users in certain tasks, e.g., within a learning process, monitoring of the elderly, interactive entertainment, etc. It may be based upon several modalities, e.g., analyzing facial expressions and/or speech, or using electroencephalograms, electrocardiograms, etc. In certain applications the only available modality is the user's (speaker's) voice.
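As a rough illustration of the speech-only case mentioned above, the sketch below computes two simple prosodic descriptors (short-time energy and zero-crossing rate) that are commonly used as input features for speech-based emotion classifiers. This is a minimal, hypothetical example with NumPy, not the method of any of the papers listed here; the function name and parameter values are assumptions for illustration.

```python
import numpy as np

def prosodic_features(signal, frame_len=400, hop=160):
    """Frame a mono speech signal and summarize simple prosodic
    descriptors (short-time energy and zero-crossing rate), which
    are typical inputs to speech-based emotion classifiers.
    Hypothetical helper, for illustration only."""
    frames = np.array([signal[s:s + frame_len]
                       for s in range(0, len(signal) - frame_len + 1, hop)])
    energy = np.mean(frames ** 2, axis=1)          # per-frame energy
    # Fraction of sample pairs whose sign changes within each frame.
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Summary statistics over frames give a fixed-length feature vector.
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

# Example: a synthetic one-second "utterance" at 16 kHz.
rng = np.random.default_rng(0)
utterance = rng.standard_normal(16000)
feats = prosodic_features(utterance)
print(feats.shape)  # (4,)
```

In a full pipeline, such per-utterance feature vectors would be passed to a classifier trained on labelled emotional speech; richer features (e.g., MFCCs or pitch contours) are typically used in practice.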