AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
2025
Dang, B.; Huynh, L.; Gul, F.; Rosé, C.; Järvelä, S.; Nguyen, A.
Human–AI collaborative learning in mixed reality: Examining the cognitive and socio-emotional interactions Journal Article
In: British Journal of Educational Technology, vol. 56, no. 5, pp. 2078–2101, 2025, ISSN: 0007-1013; 1467-8535, (Publisher: John Wiley and Sons Inc).
@article{dang_humanai_2025,
title = {Human–AI collaborative learning in mixed reality: Examining the cognitive and socio-emotional interactions},
author = {B. Dang and L. Huynh and F. Gul and C. Rosé and S. Järvelä and A. Nguyen},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007896240&doi=10.1111%2Fbjet.13607&partnerID=40&md5=1c80a5bfe5917e7a9b14ee5809da232f},
doi = {10.1111/bjet.13607},
issn = {0007-1013; 1467-8535},
year = {2025},
date = {2025-01-01},
journal = {British Journal of Educational Technology},
volume = {56},
number = {5},
pages = {2078–2101},
abstract = {The rise of generative artificial intelligence (GAI), especially with multimodal large language models like GPT-4o, sparked transformative potential and challenges for learning and teaching. With potential as a cognitive offloading tool, GAI can enable learners to focus on higher-order thinking and creativity. Yet, this also raises questions about integration into traditional education due to the limited research on learners' interactions with GAI. Some studies with GAI focus on text-based human–AI interactions, while research on embodied GAI in immersive environments like mixed reality (MR) remains unexplored. To address this, this study investigates interaction dynamics between learners and embodied GAI agents in MR, examining cognitive and socio-emotional interactions during collaborative learning. We investigated the paired interactive patterns between a student and an embodied GAI agent in MR, based on data from 26 higher education students with 1317 recorded activities. Data were analysed using a multi-layered learning analytics approach, including quantitative content analysis, sequence analysis via hierarchical clustering and pattern analysis through ordered network analysis (ONA). Our findings identified two interaction patterns: type (1) AI-led Supported Exploratory Questioning (AISQ) and type (2) Learner-Initiated Inquiry (LII) group. Despite their distinction in characteristic, both types demonstrated comparable levels of socio-emotional engagement and exhibited meaningful cognitive engagement, surpassing the superficial content reproduction that can be observed in interactions with GPT models. This study contributes to the human–AI collaboration and learning studies, extending understanding to learning in MR environments and highlighting implications for designing AI-based educational tools. 
Practitioner notes
What is already known about this topic: Socio-emotional interactions are fundamental to cognitive processes and play a critical role in collaborative learning. Generative artificial intelligence (GAI) holds transformative potential for education but raises questions about how learners interact with such technology. Most existing research focuses on text-based interactions with GAI; there is limited empirical evidence on how embodied GAI agents within immersive environments like Mixed Reality (MR) influence the cognitive and socio-emotional interactions for learning and regulation.
What this paper adds: Provides the first empirical insights into cognitive and socio-emotional interaction patterns between learners and embodied GAI agents in MR environments. Identifies two distinct interaction patterns: AISQ type (structured, guided, supportive) and LII type (inquiry-driven, exploratory, engaging), demonstrating how these patterns influence collaborative learning dynamics. Shows that both interaction types facilitate meaningful cognitive engagement, moving beyond superficial content reproduction commonly associated with GAI interactions.
Implications for practice and/or policy: Insights from the identified interaction patterns can inform the design of teaching strategies that effectively integrate embodied GAI agents to enhance both cognitive and socio-emotional engagement. Findings can guide the development of AI-based educational tools that capitalise on the capabilities of embodied GAI agents, supporting a balance between structured guidance and exploratory learning. Highlights the need for ethical considerations in adopting embodied GAI agents, particularly regarding the human-like realism of these agents and potential impacts on learner dependency and interaction norms.},
note = {Publisher: John Wiley and Sons Inc},
keywords = {Artificial intelligence agent, Collaborative learning, Educational robots, Embodied agent, Emotional intelligence, Emotional interactions, Generative adversarial networks, generative artificial intelligence, Hierarchical clustering, Human–AI collaboration, Interaction pattern, Mixed reality, ordered network analysis, Ordered network analyze, Social behavior, Social interactions, Social psychology, Students, Supervised learning, Teaching},
pubstate = {published},
tppubtype = {article}
}
2024
Zhang, Q.; Naradowsky, J.; Miyao, Y.
Self-Emotion Blended Dialogue Generation in Social Simulation Agents Proceedings Article
In: Kawahara, T.; Demberg, V.; Ultes, S.; Inoue, K.; Mehri, S.; Howcroft, D.; Komatani, K. (Eds.): pp. 228–247, Association for Computational Linguistics (ACL), 2024, ISBN: 9798891761612.
@inproceedings{zhang_self-emotion_2024,
title = {Self-Emotion Blended Dialogue Generation in Social Simulation Agents},
author = {Q. Zhang and J. Naradowsky and Y. Miyao},
editor = {T. Kawahara and V. Demberg and S. Ultes and K. Inoue and S. Mehri and D. Howcroft and K. Komatani},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105017744334&doi=10.18653%2Fv1%2F2024.sigdial-1.21&partnerID=40&md5=f185cfb5554eabfa85e6e956dfe6848e},
doi = {10.18653/v1/2024.sigdial-1.21},
isbn = {9798891761612},
year = {2024},
date = {2024-01-01},
pages = {228–247},
publisher = {Association for Computational Linguistics (ACL)},
abstract = {When engaging in conversations, dialogue agents in a virtual simulation environment may exhibit their own emotional states that are unrelated to the immediate conversational context, a phenomenon known as self-emotion. This study explores how such self-emotion affects the agents' behaviors in dialogue strategies and decision-making within a large language model (LLM)-driven simulation framework. In a dialogue strategy prediction experiment, we analyze the dialogue strategy choices employed by agents both with and without self-emotion, comparing them to those of humans. The results show that incorporating self-emotion helps agents exhibit more human-like dialogue strategies. In an independent experiment comparing the performance of models fine-tuned on GPT-4 generated dialogue datasets, we demonstrate that self-emotion can lead to better overall naturalness and humanness. Finally, in a virtual simulation environment where agents have discussions on multiple topics, we show that self-emotion of agents can significantly influence the decision-making process of the agents, leading to approximately a 50% change in decisions.},
keywords = {Agent behavior, Agents, Computational Linguistics, Decision making, Decisions makings, Dialogue generations, Dialogue strategy, Emotional state, Language Model, Model-driven, Natural language processing systems, Simulation framework, Social psychology, Social simulations, Speech processing, Virtual Reality, Virtual simulation environments},
pubstate = {published},
tppubtype = {inproceedings}
}
Baldry, M. K.; Happa, J.; Steed, A.; Smith, S.; Glencross, M.
From Embodied Abuse to Mass Disruption: Generative, Inter-Reality Threats in Social, Mixed-Reality Platforms Journal Article
In: Digital Threats: Research and Practice, vol. 5, no. 4, 2024, ISSN: 2576-5337, (Publisher: Association for Computing Machinery).
@article{baldry_embodied_2024,
title = {From Embodied Abuse to Mass Disruption: Generative, Inter-Reality Threats in Social, Mixed-Reality Platforms},
author = {M. K. Baldry and J. Happa and A. Steed and S. Smith and M. Glencross},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85212265918&doi=10.1145%2F3696015&partnerID=40&md5=3365120749356b35e6a5d947a2c42e11},
doi = {10.1145/3696015},
issn = {2576-5337},
year = {2024},
date = {2024-01-01},
journal = {Digital Threats: Research and Practice},
volume = {5},
number = {4},
abstract = {Extended Reality (XR) platforms can expose users to novel attacks including embodied abuse and/or AI attacks-at-scale. The expanded attack surfaces of XR technologies may expose users of shared online platforms to psychological/social and physiological harms via embodied interactions with potentially millions of other humans or artificial humans, causing what we define as an inter-reality attack. The past 20 years have demonstrated how social and other harms (e.g., bullying, assault and stalking) can and do shift to digital social media and gaming platforms. XR technologies becoming more mainstream has led to investigations of ethical and technical consequences of these expanded input surfaces. However, there is limited literature that investigates social attacks, particularly towards vulnerable communities, and how AI technologies may accelerate generative attacks-at-scale. This article employs human-centred research methods and a harms-centred cybersecurity framework to co-design a testbed of socio-technical attack scenarios in XR social gaming platforms. It uses speculative fiction to further extrapolate how these could reach attacks-at-scale by applying generative AI techniques. It develops an Inter-Reality Threat Model to outline how actions in virtual environments can impact the real world. As AI capability continues to rapidly develop, this article articulates the urgent need to consider a future where XR-AI attacks-at-scale could become commonplace.},
note = {Publisher: Association for Computing Machinery},
keywords = {Abuse, Augmented Reality, Cyber security, Cybersecurity, Extended reality, Game, Games, Generative adversarial networks, Harassment, Harm, harms, Mixed reality, risk, Social engineering, Social gaming, Social platform, social platforms, Social psychology, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}