AHCI RESEARCH GROUP
Publications
Papers published in international journals, in the proceedings of conferences and workshops, and in books.
OUR RESEARCH
Scientific Publications
How to
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links, and BibTeX record for each paper.
2025
Li, H.; Wang, Z.; Liang, W.; Wang, Y.
X’s Day: Personality-Driven Virtual Human Behavior Generation Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 3514–3524, 2025, ISSN: 1077-2626.
@article{li_xs_2025,
title = {X’s Day: Personality-Driven Virtual Human Behavior Generation},
author = {H. Li and Z. Wang and W. Liang and Y. Wang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003864932&doi=10.1109%2fTVCG.2025.3549574&partnerID=40&md5=a865bbd2b0fa964a4f0f4190955dc787},
doi = {10.1109/TVCG.2025.3549574},
issn = {1077-2626},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {31},
number = {5},
pages = {3514–3524},
abstract = {Developing convincing and realistic virtual human behavior is essential for enhancing user experiences in virtual reality (VR) and augmented reality (AR) settings. This paper introduces a novel task focused on generating long-term behaviors for virtual agents, guided by specific personality traits and contextual elements within 3D environments. We present a comprehensive framework capable of autonomously producing daily activities autoregressively. By modeling the intricate connections between personality characteristics and observable activities, we establish a hierarchical structure of Needs, Task, and Activity levels. Integrating a Behavior Planner and a World State module allows for the dynamic sampling of behaviors using large language models (LLMs), ensuring that generated activities remain relevant and responsive to environmental changes. Extensive experiments validate the effectiveness and adaptability of our approach across diverse scenarios. This research makes a significant contribution to the field by establishing a new paradigm for personalized and context-aware interactions with virtual humans, ultimately enhancing user engagement in immersive applications. Our project website is at: https://behavior.agent-x.cn/. © 2025 IEEE. All rights reserved.},
keywords = {adult, Augmented Reality, Behavior Generation, Chatbots, Computer graphics, computer interface, Contextual Scene, female, human, Human behaviors, Humans, Long-term behavior, male, Novel task, Personality, Personality traits, Personality-driven Behavior, physiology, Social behavior, User-Computer Interface, Users' experiences, Virtual agent, Virtual environments, Virtual humans, Virtual Reality, Young Adult},
pubstate = {published},
tppubtype = {article}
}
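The abstract above outlines an autoregressive pipeline in which a Behavior Planner samples activities from a large language model, conditioned on personality traits and a World State module. As a rough illustration only, and not the authors' code, the following Python sketch shows what such a loop could look like; every name in it (query_llm, WorldState, next_activity) is a hypothetical placeholder, and the LLM call is stubbed out.

# Illustrative sketch only: a minimal autoregressive activity-sampling loop in the
# spirit of the Needs -> Task -> Activity hierarchy described in the abstract above.
# All names are hypothetical stand-ins; a real system would call an actual LLM API.
from dataclasses import dataclass, field


@dataclass
class WorldState:
    """Tracks scene facts that generated activities may read and update."""
    time_of_day: str = "morning"
    agent_location: str = "bedroom"
    history: list = field(default_factory=list)


def query_llm(prompt: str) -> str:
    """Stub standing in for a large language model call (hypothetical)."""
    return "tidy the desk in the study"


def next_activity(personality: str, need: str, world: WorldState) -> str:
    """Sample one activity conditioned on personality, current need, and world state."""
    prompt = (
        f"Personality: {personality}\n"
        f"Current need: {need}\n"
        f"Time: {world.time_of_day}, location: {world.agent_location}\n"
        f"Recent activities: {world.history[-3:]}\n"
        "Propose the next plausible activity:"
    )
    activity = query_llm(prompt)
    world.history.append(activity)  # autoregressive: feed the result back into the context
    return activity


if __name__ == "__main__":
    world = WorldState()
    for need in ["order", "rest", "social contact"]:
        print(next_activity("conscientious, introverted", need, world))

In a full system the stubbed query_llm would call a real model and the returned activity would also update the scene state, which is what keeps a generated day responsive to environmental changes.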
Song, T.; Pabst, F.; Eck, U.; Navab, N.
Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 2901–2911, 2025, ISSN: 1077-2626.
@article{song_enhancing_2025,
title = {Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations},
author = {T. Song and F. Pabst and U. Eck and N. Navab},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003687673&doi=10.1109%2fTVCG.2025.3549181&partnerID=40&md5=1d46569933582ecf5e967f0794aafc07},
doi = {10.1109/TVCG.2025.3549181},
issn = {1077-2626},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {31},
number = {5},
pages = {2901–2911},
abstract = {Robotic ultrasound systems have the potential to improve medical diagnostics, but patient acceptance remains a key challenge. To address this, we propose a novel system that combines an AI-based virtual agent, powered by a large language model (LLM), with three mixed reality visualizations aimed at enhancing patient comfort and trust. The LLM enables the virtual assistant to engage in natural, conversational dialogue with patients, answering questions in any format and offering real-time reassurance, creating a more intelligent and reliable interaction. The virtual assistant is animated as controlling the ultrasound probe, giving the impression that the robot is guided by the assistant. The first visualization employs augmented reality (AR), allowing patients to see the real world and the robot with the virtual avatar superimposed. The second visualization is an augmented virtuality (AV) environment, where the real-world body part being scanned is visible, while a 3D Gaussian Splatting reconstruction of the room, excluding the robot, forms the virtual environment. The third is a fully immersive virtual reality (VR) experience, featuring the same 3D reconstruction but entirely virtual, where the patient sees a virtual representation of their body being scanned in a robot-free environment. In this case, the virtual ultrasound probe mirrors the movement of the probe controlled by the robot, creating a synchronized experience as it touches and moves over the patient's virtual body. We conducted a comprehensive agent-guided robotic ultrasound study with all participants, comparing these visualizations against a standard robotic ultrasound procedure. Results showed significant improvements in patient trust, acceptance, and comfort. Based on these findings, we offer insights into designing future mixed reality visualizations and virtual agents to further enhance patient comfort and acceptance in autonomous medical procedures. © 1995-2012 IEEE.},
keywords = {3D reconstruction, adult, Augmented Reality, Computer graphics, computer interface, echography, female, human, Humans, Imaging, Intelligent robots, Intelligent virtual agents, Language Model, male, Medical robotics, Middle Aged, Mixed reality, Patient Acceptance of Health Care, patient attitude, Patient comfort, procedures, Real-world, Reality visualization, Robotic Ultrasound, Robotics, Three-Dimensional, three-dimensional imaging, Trust and Acceptance, Ultrasonic applications, Ultrasonic equipment, Ultrasonography, Ultrasound probes, User-Computer Interface, Virtual agent, Virtual assistants, Virtual environments, Virtual Reality, Visual languages, Visualization, Young Adult},
pubstate = {published},
tppubtype = {article}
}
Stacchio, L.; Balloni, E.; Frontoni, E.; Paolanti, M.; Zingaretti, P.; Pierdicca, R.
MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 3602–3612, 2025, ISSN: 1077-2626.
@article{stacchio_minevra_2025,
title = {MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach},
author = {L. Stacchio and E. Balloni and E. Frontoni and M. Paolanti and P. Zingaretti and R. Pierdicca},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003746367&doi=10.1109%2fTVCG.2025.3549160&partnerID=40&md5=70b162b574eebbb0cb71db871aa787e1},
doi = {10.1109/TVCG.2025.3549160},
issn = {1077-2626},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {31},
number = {5},
pages = {3602–3612},
abstract = {The convergence of Artificial Intelligence (AI), Computer Vision (CV), Computer Graphics (CG), and Extended Reality (XR) is driving innovation in immersive environments. A key challenge in these environments is the creation of personalized 3D assets, traditionally achieved through manual modeling, a time-consuming process that often fails to meet individual user needs. More recently, Generative AI (GenAI) has emerged as a promising solution for automated, context-aware content generation. In this paper, we present MineVRA (Multimodal generative artificial iNtelligence for contExt-aware Virtual Reality Assets), a novel Human-In-The-Loop (HITL) XR framework that integrates GenAI to facilitate coherent and adaptive 3D content generation in immersive scenarios. To evaluate the effectiveness of this approach, we conducted a comparative user study analyzing the performance and user satisfaction of GenAI-generated 3D objects compared to those generated by Sketchfab in different immersive contexts. The results suggest that GenAI can significantly complement traditional 3D asset libraries, with valuable design implications for the development of human-centered XR environments. © 1995-2012 IEEE.},
keywords = {adult, Article, Artificial intelligence, Computer graphics, Computer vision, Content Development, Contents development, Context-Aware, Context-aware approaches, Extended reality, female, Generative adversarial networks, Generative AI, generative artificial intelligence, human, Human-in-the-loop, Immersive, Immersive environment, male, Multi-modal, User need, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}
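The MineVRA abstract describes a human-in-the-loop (HITL) arrangement in which generated 3D assets are reviewed in the immersive environment before being accepted. The Python sketch below is only a schematic of that control flow under assumed interfaces; generate_asset and user_approves are invented placeholders, not part of the published framework.

# Illustrative sketch only: a minimal human-in-the-loop generation loop of the kind
# the MineVRA abstract describes. Both helper functions are hypothetical stubs; a real
# pipeline would call a text-to-3D generative model and an XR review interface.
def generate_asset(context: dict, feedback: str | None = None) -> str:
    """Stub for a context-aware generative model producing a 3D asset reference."""
    base = f"{context['style']} {context['object']}"
    return f"{base} (revised: {feedback})" if feedback else base


def user_approves(asset: str) -> bool:
    """Stub for the human review step inside the immersive environment."""
    return "revised" in asset  # pretend the user accepts only a revised asset


def hitl_generate(context: dict, max_rounds: int = 3) -> str:
    """Regenerate until the user accepts the asset or the round budget is spent."""
    asset = generate_asset(context)
    for _ in range(max_rounds):
        if user_approves(asset):
            return asset
        feedback = "match the scene's lighting and scale"  # stand-in for user feedback
        asset = generate_asset(context, feedback)
    return asset


if __name__ == "__main__":
    print(hitl_generate({"object": "wooden chair", "style": "low-poly medieval"}))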
Hassoulas, A.; Crawford, O.; Hemrom, S.; Almeida, A.; Coffey, M. J.; Hodgson, M.; Leveridge, B.; Karwa, D.; Lethbridge, A.; Williams, H.; Voisey, A.; Reed, K.; Patel, S.; Hart, K.; Shaw, H.
A pilot study investigating the efficacy of technology enhanced case based learning (CBL) in small group teaching Journal Article
In: Scientific Reports, vol. 15, no. 1, 2025, ISSN: 2045-2322.
@article{hassoulas_pilot_2025,
title = {A pilot study investigating the efficacy of technology enhanced case based learning (CBL) in small group teaching},
author = {A. Hassoulas and O. Crawford and S. Hemrom and A. Almeida and M. J. Coffey and M. Hodgson and B. Leveridge and D. Karwa and A. Lethbridge and H. Williams and A. Voisey and K. Reed and S. Patel and K. Hart and H. Shaw},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105004223025&doi=10.1038%2fs41598-025-99764-5&partnerID=40&md5=8588cac4c3ffe437e667ba4373e010ec},
doi = {10.1038/s41598-025-99764-5},
issn = {2045-2322},
year = {2025},
date = {2025-01-01},
journal = {Scientific Reports},
volume = {15},
number = {1},
abstract = {The recent paradigm shift in teaching provision within higher education, following the COVID-19 pandemic, has led to blended models of learning prevailing in the pedagogic literature and in education practice. This shift has also resulted in an abundance of tools and technologies coming to market. Whilst the value of integrating technology into teaching and assessment has been well-established in the literature, the magnitude of choice available to educators and to students can be overwhelming. The current pilot investigated the feasibility of integrating key technologies in delivering technology-enhanced learning (TEL) case-based learning (CBL) within a sample of year two medical students. The cohort was selected at random, as was the control group receiving conventional CBL. Both groups were matched on prior academic performance. The TEL-CBL group received (1) in-person tutorials delivered within an immersive learning suite, (2) access to 3D anatomy software to explore during their self-directed learning time, (3) virtual reality (VR) guided anatomy exploration during tutorials, (4) access to a generative AI-based simulated virtual patient repository to practice key skills such as communication and history taking, and (5) an immersive medical emergency simulation. Metrics assessed included formative academic performance, student learning experience, and confidence in relation to communication and clinical skills. The results revealed that the TEL-CBL group outperformed their peers in successive formative assessments (p < 0.05), engaged thoroughly with the technologies at their disposal, and reported that these technologies enhanced their learning experience. Furthermore, students reported that access to the GenAI-simulated virtual patient platform and the immersive medical emergency simulation improved their clinical confidence and gave them a useful insight into what they can expect during the clinical phase of their medical education. The results are discussed in relation to the advantages that key emerging technologies may play in enhancing student performance, experience and confidence. © The Author(s) 2025.},
keywords = {coronavirus disease 2019, Covid-19, epidemiology, female, human, Humans, Learning, male, Medical, Medical student, Pilot Projects, pilot study, problem based learning, Problem-Based Learning, procedures, SARS-CoV-2, Severe acute respiratory syndrome coronavirus 2, Students, Teaching, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}
2024
Sheehy, L.; Bouchard, S.; Kakkar, A.; El Hakim, R.; Lhoest, J.; Frank, A.
Development and Initial Testing of an Artificial Intelligence-Based Virtual Reality Companion for People Living with Dementia in Long-Term Care Journal Article
In: Journal of Clinical Medicine, vol. 13, no. 18, 2024, ISSN: 2077-0383.
@article{sheehy_development_2024,
title = {Development and Initial Testing of an Artificial Intelligence-Based Virtual Reality Companion for People Living with Dementia in Long-Term Care},
author = {L. Sheehy and S. Bouchard and A. Kakkar and R. El Hakim and J. Lhoest and A. Frank},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205071099&doi=10.3390%2fjcm13185574&partnerID=40&md5=844732ff858a0d5feb0a95a54093ad4d},
doi = {10.3390/jcm13185574},
issn = {2077-0383},
year = {2024},
date = {2024-01-01},
journal = {Journal of Clinical Medicine},
volume = {13},
number = {18},
abstract = {Background/Objectives: Feelings of loneliness are common in people living with dementia (PLWD) in long-term care (LTC). The goals of this study were to describe the development of a novel virtual companion for PLWD living in LTC and assess its feasibility and acceptability. Methods: The computer-generated virtual companion, presented using a head-mounted virtual reality display, was developed in two stages. In Stage 1, the virtual companion asked questions designed to encourage conversation and reminiscence. In Stage 2, more powerful artificial intelligence tools allowed the virtual companion to engage users in nuanced discussions on any topic. PLWD in LTC tested the application at each stage to assess feasibility and acceptability. Results: Ten PLWD living in LTC participated in Stage 1 (4 men and 6 women; average 82 years old) and Stage 2 (2 men and 8 women; average 87 years old). Session lengths ranged from 0:00 to 5:30 min in Stage 1 and 0:00 to 53:50 min in Stage 2. Speech recognition issues and a limited repertoire of questions limited acceptance in Stage 1. Enhanced conversational ability in Stage 2 led to intimate and meaningful conversations with many participants. Many users found the head-mounted display heavy. There were no complaints of simulator sickness. The virtual companion was best suited to PLWD who could engage in reciprocal conversation. After Stage 2, response latency was identified as an opportunity for improvement in future versions. Conclusions: Virtual reality and artificial intelligence can be used to create a virtual companion that is acceptable and enjoyable to some PLWD living in LTC. Ongoing innovations in hardware and software will allow future iterations to provide more natural conversational interaction and an enhanced social experience. © 2024 by the authors.},
keywords = {aged, Article, Artificial intelligence, cognitive decline, cognitive impairment, compassion, conversation, Dementia, Elderly, female, human, large language models, long term care, long-term care, major clinical study, male, program acceptability, program feasibility, reaction time, reminiscence, speech discrimination, very elderly, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}