AHCI RESEARCH GROUP
Publications
Papers published in international journals,
conference and workshop proceedings, and books.
OUR RESEARCH
Scientific Publications
How to
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTex record for each paper.
2025
Li, Z.; Zhang, H.; Peng, C.; Peiris, R.
Exploring Large Language Model-Driven Agents for Environment-Aware Spatial Interactions and Conversations in Virtual Reality Role-Play Scenarios Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces, VR, pp. 1–11, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331536459.
Abstract | Links | BibTeX | Tags: Chatbots, Computer simulation languages, Context-awareness, Digital elevation model, Generative AI, Human-AI Interaction, Language Model, Large language models, Model agents, Role-play simulations, Role-plays, Spatial interaction, Virtual environments, Virtual Reality, Virtual-reality environment
@inproceedings{li_exploring_2025,
title = {Exploring Large Language Model-Driven Agents for Environment-Aware Spatial Interactions and Conversations in Virtual Reality Role-Play Scenarios},
author = {Z. Li and H. Zhang and C. Peng and R. Peiris},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105002706893&doi=10.1109%2FVR59515.2025.00025&partnerID=40&md5=1987c128f6ec4bd24011388ef9ece179},
doi = {10.1109/VR59515.2025.00025},
isbn = {9798331536459},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces, VR},
pages = {1–11},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Recent research has begun adopting Large Language Model (LLM) agents to enhance Virtual Reality (VR) interactions, creating immersive chatbot experiences. However, while current studies focus on generating dialogue from user speech inputs, their abilities to generate richer experiences based on the perception of LLM agents' VR environments and interaction cues remain unexplored. Hence, in this work, we propose an approach that enables LLM agents to perceive virtual environments and generate environment-aware interactions and conversations for an embodied human-AI interaction experience in VR environments. Here, we define a schema for describing VR environments and their interactions through text prompts. We evaluate the performance of our method through five role-play scenarios created using our approach in a study with 14 participants. The findings discuss the opportunities and challenges of our proposed approach for developing environment-aware LLM agents that facilitate spatial interactions and conversations within VR role-play scenarios.},
keywords = {Chatbots, Computer simulation languages, Context-awareness, Digital elevation model, Generative AI, Human-AI Interaction, Language Model, Large language models, Model agents, Role-play simulations, Role-plays, Spatial interaction, Virtual environments, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
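The schema idea in this paper lends itself to a compact illustration. Below is a minimal Python sketch of how a VR scene might be serialized into a text prompt for an LLM agent; every class, field, and affordance name here is a hypothetical stand-in chosen for illustration, not the authors' actual schema.

# Minimal sketch of an environment-description schema for an LLM agent.
# All field names are hypothetical; the paper's schema may differ.
from dataclasses import dataclass, field

@dataclass
class VRObject:
    name: str
    position: tuple      # (x, y, z) in scene coordinates
    affordances: list    # interactions the agent may perform, e.g. ["pick_up"]

@dataclass
class VRScene:
    room: str
    objects: list = field(default_factory=list)

def scene_to_prompt(scene: VRScene, user_utterance: str) -> str:
    """Serialize the scene into a text prompt so the LLM can ground its reply."""
    lines = [f"You are an embodied agent in: {scene.room}."]
    for obj in scene.objects:
        lines.append(f"- {obj.name} at {obj.position}, supports: {', '.join(obj.affordances)}")
    lines.append(f"User says: {user_utterance!r}")
    lines.append("Respond with a spoken reply and, if appropriate, one action on a listed object.")
    return "\n".join(lines)

scene = VRScene(room="tavern", objects=[VRObject("mug", (1.2, 0.9, 0.4), ["pick_up", "hand_over"])])
print(scene_to_prompt(scene, "Could I get something to drink?"))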
Gao, H.; Xie, Y.; Kasneci, E.
PerVRML: ChatGPT-Driven Personalized VR Environments for Machine Learning Education Journal Article
In: International Journal of Human-Computer Interaction, 2025, ISSN: 1044-7318; 1532-7590, (Publisher: Taylor and Francis Ltd.).
Abstract | Links | BibTeX | Tags: Backpropagation, ChatGPT, Curricula, Educational robots, Immersive learning, Interactive learning, Language Model, Large language models, Learning mode, Machine learning education, Machine-learning, Personalized learning, Support vector machines, Teaching, Virtual Reality, Virtual-reality environment, Virtualization
@article{gao_pervrml_2025,
title = {PerVRML: ChatGPT-Driven Personalized VR Environments for Machine Learning Education},
author = {H. Gao and Y. Xie and E. Kasneci},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005776517&doi=10.1080%2F10447318.2025.2504188&partnerID=40&md5=27accdeba3e1e2202fc1102053d54b7c},
doi = {10.1080/10447318.2025.2504188},
issn = {1044-7318; 1532-7590},
year = {2025},
date = {2025-01-01},
journal = {International Journal of Human-Computer Interaction},
abstract = {The advent of large language models (LLMs) such as ChatGPT has demonstrated significant potential for advancing educational technologies. Recently, growing interest has emerged in integrating ChatGPT with virtual reality (VR) to provide interactive and dynamic learning environments. This study explores the effectiveness of ChatGPT-driven VR in facilitating machine learning education through PerVRML. PerVRML incorporates a ChatGPT-powered avatar that provides real-time assistance and uses LLMs to personalize learning paths based on various sensor data from VR. A between-subjects design was employed to compare two learning modes: personalized and non-personalized. Quantitative data were collected from assessments, user experience surveys, and interaction metrics. The results indicate that while both learning modes supported learning effectively, ChatGPT-powered personalization significantly improved learning outcomes and had distinct impacts on user feedback. These findings underscore the potential of ChatGPT-enhanced VR to deliver adaptive and personalized educational experiences.},
note = {Publisher: Taylor and Francis Ltd.},
keywords = {Backpropagation, ChatGPT, Curricula, Educational robots, Immersive learning, Interactive learning, Language Model, Large language models, Learning mode, Machine learning education, Machine-learning, Personalized learning, Support vector machines, Teaching, Virtual Reality, Virtual-reality environment, Virtualization},
pubstate = {published},
tppubtype = {article}
}
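The abstract describes LLM-driven personalization of learning paths from VR sensor data. As a rough illustration only, the core loop can be imagined as summarizing sensor readings into a prompt and letting the model choose the next module; the function and field names below are hypothetical assumptions, not PerVRML's actual interface.

# Minimal sketch of LLM-driven personalization from VR sensor data.
# All names are illustrative; not PerVRML's real API.
def summarize_sensors(gaze_on_task_ratio: float, quiz_score: float, idle_seconds: int) -> str:
    return (f"Gaze on task: {gaze_on_task_ratio:.0%}. "
            f"Last quiz: {quiz_score:.0%}. Idle time: {idle_seconds}s.")

def build_personalization_prompt(sensor_summary: str, modules: list) -> str:
    return ("You adapt a machine-learning curriculum in VR.\n"
            f"Learner state: {sensor_summary}\n"
            f"Available modules: {', '.join(modules)}\n"
            "Pick the next module and one sentence of guidance for the avatar to speak.")

prompt = build_personalization_prompt(
    summarize_sensors(0.45, 0.60, 120),
    ["SVM intuition", "backpropagation walkthrough", "hands-on classifier demo"],
)
# `prompt` would be sent to the LLM; the reply drives the avatar and module choice.
print(prompt)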
Lau, K. H. C.; Bozkir, E.; Gao, H.; Kasneci, E.
Evaluating Usability and Engagement of Large Language Models in Virtual Reality for Traditional Scottish Curling Proceedings Article
In: Del Bue, A.; Canton, C.; Pont-Tuset, J.; Tommasi, T. (Eds.): Lect. Notes Comput. Sci., pp. 177–195, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 0302-9743; ISBN: 978-303191571-0.
Abstract | Links | BibTeX | Tags: Chatbots, Cultural heritages, Digital cultural heritage, Educational robots, Engineering education, Heritage education, Historic Preservation, Language Model, Large language models, Learning outcome, Model-based OPC, Usability engineering, User Engagement, Virtual Reality, Virtual-reality environment, Virtualization
@inproceedings{lau_evaluating_2025,
title = {Evaluating Usability and Engagement of Large Language Models in Virtual Reality for Traditional Scottish Curling},
author = {K. H. C. Lau and E. Bozkir and H. Gao and E. Kasneci},
editor = {Del Bue, A. and Canton, C. and Pont-Tuset, J. and Tommasi, T.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105006905979&doi=10.1007%2f978-3-031-91572-7_11&partnerID=40&md5=8a81fb09ff54e57b9429660a8898149a},
doi = {10.1007/978-3-031-91572-7_11},
issn = {0302-9743},
isbn = {978-303191571-0},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15628 LNCS},
pages = {177–195},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {This paper explores the innovative application of Large Language Models (LLMs) in Virtual Reality (VR) environments to promote heritage education, focusing on traditional Scottish curling presented in the game “Scottish Bonspiel VR”. Our study compares the effectiveness of LLM-based chatbots with pre-defined scripted chatbots, evaluating key criteria such as usability, user engagement, and learning outcomes. The results show that LLM-based chatbots significantly improve interactivity and engagement, creating a more dynamic and immersive learning environment. This integration helps document and preserve cultural heritage and enhances dissemination processes, which are crucial for safeguarding intangible cultural heritage (ICH) amid environmental changes. Furthermore, the study highlights the potential of novel technologies in education to provide immersive experiences that foster a deeper appreciation of cultural heritage. These findings support the wider application of LLMs and VR in cultural education to address global challenges and promote sustainable practices to preserve and enhance cultural heritage. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Chatbots, Cultural heritages, Digital cultural heritage, Educational robots, Engineering education, Heritage education, Historic Preservation, Language Model, Large language models, Learning outcome, Model-based OPC, Usability engineering, User Engagement, Virtual Reality, Virtual-reality environment, Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
López-Ozieblo, R.; Jiandong, D. S.; Techanamurthy, U.; Geng, H.; Nurgissayeva, A.
Enhancing AI Literacy through Immersive VR: Evaluating Pedagogical Design and GenAI Integration Proceedings Article
In: pp. 718–723, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331511661.
Abstract | Links | BibTeX | Tags: AI Literacy, Artificial intelligence, Behavioral Research, Classlet platform, E-Learning, Educational settings, Emerging technologies, Engineering education, Experiential learning, GenAI avatars, Immersive virtual reality, Interactive computer graphics, Pedagogical designs, Pedagogical Innovation, Regression analysis, Teaching, Virtual Reality, Virtual-reality environment
@inproceedings{lopez-ozieblo_enhancing_2025,
title = {Enhancing AI Literacy through Immersive VR: Evaluating Pedagogical Design and GenAI Integration},
author = {R. López-Ozieblo and D. S. Jiandong and U. Techanamurthy and H. Geng and A. Nurgissayeva},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105013538409&doi=10.1109%2FCSTE64638.2025.11092268&partnerID=40&md5=a963d754ceaa73f360d9678d346a7686},
doi = {10.1109/CSTE64638.2025.11092268},
isbn = {9798331511661},
year = {2025},
date = {2025-01-01},
pages = {718–723},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {As AI continues to reshape industries, enhancing AI literacy is crucial for empowering learners to interact confidently and critically with emerging technologies. Virtual Reality (VR) offers a way to bridge theoretical knowledge with practical application, but integrating VR into educational settings struggles with technical and pedagogical challenges. This study investigates how immersive VR environments can be optimized to enhance AI literacy and identifies key factors driving students' intent to adopt these technologies. Using Classlet - a VR platform that integrates interactive multimodal tasks, narrative-driven activities, and GenAI avatar interactions - we created a virtual office where learners engaged in research tasks and simulation scenarios with instructor-customized prompts. Our mixed-methods approach, involving participants from Hong Kong and Malaysia, focused on AI literacy within contexts such as Fast Fashion and European society. Regression analyses revealed that overall intent is strongly predicted by composite enjoyment, perceived performance, and behavioral control (R² = 0.803). Post-AI literacy self-assessments were predicted by AI self-efficacy and enjoyment (R² = 0.421). However, female participants reported lower scores on AI efficacy (p = 0.042), suggesting baseline differences that warrant further investigation. Qualitative insights show the immersive and engaging nature of the experience while highlighting the need for further GenAI prompt designs for elaborative and bidirectional interactions.},
keywords = {AI Literacy, Artificial intelligence, Behavioral Research, Classlet platform, E-Learning, Educational settings, Emerging technologies, Engineering education, Experiential learning, GenAI avatars, Immersive virtual reality, Interactive computer graphics, Pedagogical designs, Pedagogical Innovation, Regression analysis, Teaching, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Ding, S.; Yalla, J. P.; Chen, Y.
Demo Abstract: RAG-Driven 3D Question Answering in Edge-Assisted Virtual Reality Proceedings Article
In: Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331543709.
Abstract | Links | BibTeX | Tags: Edge computing, Edge server, Interface states, Knowledge database, Language Model, Local knowledge, Office environments, Question Answering, Real-time, User interaction, User interfaces, Virtual environments, Virtual Reality, Virtual reality system, Virtual-reality environment
@inproceedings{ding_demo_2025,
title = {Demo Abstract: RAG-Driven 3D Question Answering in Edge-Assisted Virtual Reality},
author = {S. Ding and J. P. Yalla and Y. Chen},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105017970015&doi=10.1109%2FINFOCOMWKSHPS65812.2025.11152992&partnerID=40&md5=0e079de018ae9c4a564b98c304a9ea6c},
doi = {10.1109/INFOCOMWKSHPS65812.2025.11152992},
isbn = {9798331543709},
year = {2025},
date = {2025-01-01},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The rapid development of large language models (LLMs) has created new opportunities in 3D question answering (3D-QA) for virtual reality (VR). 3D-QA enhances user interaction by answering questions about virtual environments. However, performing 3D-QA in VR systems using LLM-based approaches is computation-intensive. Furthermore, general LLMs tend to generate inaccurate responses as they lack context-specific information in VR environments. To mitigate these limitations, we propose OfficeVR-QA, a 3D-QA framework for edge-assisted VR to alleviate the resource constraints of VR devices with the help of edge servers, demonstrated in a virtual office environment. To improve the accuracy of the generated answers, the edge server of OfficeVR-QA hosts retrieval-augmented generation (RAG) that augments LLMs with external knowledge retrieved from a local knowledge database extracted from VR environments and users. During an interactive demo, OfficeVR-QA will continuously update the local knowledge database in real time by transmitting participants' position and orientation data to the edge server, enabling adaptive responses to changes in the participants' states. Participants will navigate a VR office environment, interact with a VR user interface to ask questions, and observe the accuracy of dynamic responses based on their real-time state changes.},
keywords = {Edge computing, Edge server, Interface states, Knowledge database, Language Model, Local knowledge, Office environments, Question Answering, Real-time, User interaction, User interfaces, Virtual environments, Virtual Reality, Virtual reality system, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
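To make the RAG pipeline described in this demo concrete, here is a minimal Python sketch of the retrieval step: a toy bag-of-words similarity stands in for learned embeddings, and the user's live position enters the knowledge base as a retrievable fact. All names are illustrative assumptions, not OfficeVR-QA's actual API.

# Minimal sketch of retrieval-augmented prompt assembly with live state updates.
import math
from collections import Counter

knowledge = [
    "The printer is in the north corner of the office, next to the window.",
    "Meeting room B seats eight people and has a whiteboard.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def update_user_state(x: float, y: float, z: float, facing: str) -> None:
    # Real-time position/orientation updates become retrievable facts.
    knowledge.append(f"The user is at position ({x}, {y}, {z}), facing {facing}.")

def retrieve(question: str, k: int = 2) -> list:
    q = vectorize(question)
    return sorted(knowledge, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)[:k]

update_user_state(3.0, 0.0, 1.5, "the window")
context = retrieve("Where is the printer relative to me?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)  # the assembled prompt would then go to the LLM on the edge server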
Kurai, R.; Hiraki, T.; Hiroi, Y.; Hirao, Y.; Perusquía-Hernández, M.; Uchiyama, H.; Kiyokawa, K.
MagicItem: Dynamic Behavior Design of Virtual Objects With Large Language Models in a Commercial Metaverse Platform Journal Article
In: IEEE Access, vol. 13, pp. 19132–19143, 2025, ISSN: 2169-3536, (Publisher: Institute of Electrical and Electronics Engineers Inc.).
Abstract | Links | BibTeX | Tags: Behavior design, Code programming, Computer simulation languages, Dynamic behaviors, Language Model, Large-language model, Low-code programming, Metaverse platform, Metaverses, Virtual addresses, Virtual environments, Virtual objects, Virtual Reality, Virtual-reality environment
@article{kurai_magicitem_2025,
title = {MagicItem: Dynamic Behavior Design of Virtual Objects With Large Language Models in a Commercial Metaverse Platform},
author = {R. Kurai and T. Hiraki and Y. Hiroi and Y. Hirao and M. Perusquía-Hernández and H. Uchiyama and K. Kiyokawa},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85216011970&doi=10.1109%2FACCESS.2025.3530439&partnerID=40&md5=6de2e69c95854cb0860a95d0f4246d8d},
doi = {10.1109/ACCESS.2025.3530439},
issn = {2169-3536},
year = {2025},
date = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {19132–19143},
abstract = {To create rich experiences in virtual reality (VR) environments, it is essential to define the behavior of virtual objects through programming. However, programming in 3D spaces requires a wide range of background knowledge and programming skills. Although Large Language Models (LLMs) have provided programming support, they are still primarily aimed at programmers. In metaverse platforms, where many users inhabit VR spaces, most users are unfamiliar with programming, making it difficult for them to modify the behavior of objects in the VR environment easily. Existing LLM-based script generation methods for VR spaces require multiple lengthy iterations to implement the desired behaviors and are difficult to integrate into the operation of metaverse platforms. To address this issue, we propose a tool that generates behaviors for objects in VR spaces from natural language within Cluster, a metaverse platform with a large user base. By integrating LLMs with the Cluster Script provided by this platform, we enable users with limited programming experience to define object behaviors within the platform freely. We also integrated our tool into a commercial metaverse platform and conducted online experiments with 63 general users of the platform. The experiments show that even users with no programming background can successfully generate behaviors for objects in VR spaces, resulting in a highly satisfying system. Our research contributes to democratizing VR content creation by enabling non-programmers to design dynamic behaviors for virtual objects in metaverse platforms.},
note = {Publisher: Institute of Electrical and Electronics Engineers Inc.},
keywords = {Behavior design, Code programming, Computer simulation languages, Dynamic behaviors, Language Model, Large-language model, Low-code programming, Metaverse platform, Metaverses, Virtual addresses, Virtual environments, Virtual objects, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {article}
}
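A minimal sketch of the prompt-assembly step such a natural-language-to-behavior tool needs, assuming a chat-style LLM API. The function names and the stubbed call below are hypothetical; the real system targets Cluster Script and validates the generated code on the platform side.

# Minimal sketch: wrap a user's natural-language request into a
# code-generation request for an LLM. Names are illustrative only.
SYSTEM_PROMPT = """You generate behavior scripts for objects in a VR space.
Output only code. The script runs when a user interacts with the object."""

def build_request(object_name: str, user_description: str) -> list:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Object: {object_name}\nDesired behavior: {user_description}"},
    ]

def call_llm(messages: list) -> str:
    # Stub: a real system would call an LLM API here and validate the
    # returned script before attaching it to the object.
    return "// generated script placeholder"

script = call_llm(build_request("treasure chest", "opens with a creak when clicked"))
print(script)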
2024
White, M.; Banerjee, N. K.; Banerjee, S.
VRcabulary: A VR Environment for Reinforced Language Learning via Multi-Modular Design Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 315–319, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-835037202-1.
Abstract | Links | BibTeX | Tags: E-Learning, Foreign language, Immersive, Instructional modules, Language learning, Modular designs, Modulars, Multi-modular, Reinforcement, Second language, Virtual Reality, Virtual-reality environment
@inproceedings{white_vrcabulary_2024,
title = {VRcabulary: A VR Environment for Reinforced Language Learning via Multi-Modular Design},
author = {M. White and N. K. Banerjee and S. Banerjee},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85187241160&doi=10.1109%2fAIxVR59861.2024.00053&partnerID=40&md5=4d8ff8ac5c6aa8336a571ba906fe0f5d},
doi = {10.1109/AIxVR59861.2024.00053},
isbn = {979-835037202-1},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {315–319},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {We demonstrate VRcabulary, a hierarchical modular virtual reality (VR) environment for language learning (LL). Current VR LL apps lack the benefit of reinforcement presented in typical classroom environments. Apps either introduce content in the second language and lack retention testing, or provide gamification without an in-environment instructional component. To acquire reinforcement of knowledge, the learner needs to visit the app multiple times, increasing the potential for monotony. In VRcabulary, we introduce a multi-modular hierarchical design with 3 modules - an instructional module providing AI-generated audio playbacks of object names, a practice module enabling interaction-based reinforcement of object names in response to audio playback, and an exam module enabling retention testing through interaction. To incentivize engagement by reducing monotony, we keep the designs of each module distinct. We provide sequential object presentations in the instructional module and multiple object assortments in the practice and exam modules. We provide feedback and multiple trials in the practice module, but eliminate them from the exam module. We expect cross-module diversity of interaction in VRcabulary to enhance engagement in VR LL. © 2024 IEEE.},
keywords = {E-Learning, Foreign language, Immersive, Instructional modules, Language learning, Modular designs, Modulars, Multi-modular, Reinforcement, Second language, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Chheang, V.; Sharmin, S.; Marquez-Hernandez, R.; Patel, M.; Rajasekaran, D.; Caulfield, G.; Kiafar, B.; Li, J.; Kullu, P.; Barmaki, R. L.
Towards Anatomy Education with Generative AI-based Virtual Assistants in Immersive Virtual Reality Environments Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 21–30, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 9798350372021.
Abstract | Links | BibTeX | Tags: 3-D visualization systems, Anatomy education, Cognitive complexity, E-Learning, Embodied virtual assistants, Generative AI, generative artificial intelligence, Human-computer interaction, Immersive virtual reality, Interactive 3d visualizations, Knowledge Management, Medical education, Three dimensional computer graphics, Verbal communications, Virtual assistants, Virtual Reality, Virtual-reality environment
@inproceedings{chheang_towards_2024,
title = {Towards Anatomy Education with Generative AI-based Virtual Assistants in Immersive Virtual Reality Environments},
author = {V. Chheang and S. Sharmin and R. Marquez-Hernandez and M. Patel and D. Rajasekaran and G. Caulfield and B. Kiafar and J. Li and P. Kullu and R. L. Barmaki},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85187216893&doi=10.1109%2FAIxVR59861.2024.00011&partnerID=40&md5=9b2e2671cdf57b4df3e4ac8a32fa4014},
doi = {10.1109/AIxVR59861.2024.00011},
isbn = {9798350372021},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {21–30},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Virtual reality (VR) and interactive 3D visualization systems have enhanced educational experiences and environments, particularly in complicated subjects such as anatomy education. VR-based systems surpass the potential limitations of traditional training approaches in facilitating interactive engagement among students. However, research on embodied virtual assistants that leverage generative artificial intelligence (AI) and verbal communication in the anatomy education context is underrepresented. In this work, we introduce a VR environment with a generative AI-embodied virtual assistant to support participants in responding to varying cognitive complexity anatomy questions and enable verbal communication. We assessed the technical efficacy and usability of the proposed environment in a pilot user study with 16 participants. We conducted a within-subject design for virtual assistant configuration (avatar- and screen-based), with two levels of cognitive complexity (knowledge- and analysis-based). The results reveal a significant difference in the scores obtained from knowledge- and analysis-based questions in relation to avatar configuration. Moreover, results provide insights into usability, cognitive task load, and the sense of presence in the proposed virtual assistant configurations. Our environment and results of the pilot study offer potential benefits and future research directions beyond medical education, using generative AI and embodied virtual agents as customized virtual conversational assistants.},
keywords = {3-D visualization systems, Anatomy education, Cognitive complexity, E-Learning, Embodied virtual assistants, Generative AI, generative artificial intelligence, Human-computer interaction, Immersive virtual reality, Interactive 3d visualizations, Knowledge Management, Medical education, Three dimensional computer graphics, Verbal communications, Virtual assistants, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Liu, P.; Kitson, A.; Picard-Deland, C.; Carr, M.; Liu, S.; LC, R.; Chen, C.
Virtual Dream Reliving: Exploring Generative AI in Immersive Environment for Dream Re-experiencing Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2024, ISBN: 9798400703317.
Abstract | Links | BibTeX | Tags: Dream Re-experiencing, Dreamwork Engineering, Fundamental component, Generative AI, Immersive environment, Interactivity, Key elements, Personal Insight, Scientific Creativity, Virtual Reality, Virtual-reality environment
@inproceedings{liu_virtual_2024,
title = {Virtual Dream Reliving: Exploring Generative AI in Immersive Environment for Dream Re-experiencing},
author = {P. Liu and A. Kitson and C. Picard-Deland and M. Carr and S. Liu and R. LC and C. Chen},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85194182384&doi=10.1145%2F3613905.3644054&partnerID=40&md5=89a198e59bf383f9142703189035f0fd},
doi = {10.1145/3613905.3644054},
isbn = {9798400703317},
year = {2024},
date = {2024-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {Dreaming is a fundamental component of the human experience. Modern-day psychologists and neuroscientists use "dreamwork" to describe a variety of strategies that deepen and engage with dreams. Re-experiencing the dream as if reliving the memory, feelings, and bodily sensations from the dream is a key element shared by many dreamwork practices. In this paper, we propose the concept of "dreamwork engineering" by creating a system enabling dream re-experiencing in a virtual reality environment through generative AI. Through an autoethnographic study, the first author documented his own dreams and relived his dream experiences for two weeks. Based on our results, we propose a technology-aided dreamwork framework, where technology could potentially augment traditional dreamwork methods through spatiality and movement, interactivity and abstract anchor. We further highlight the collaborative role of technology in dreamwork and advocate that the scientific community could also benefit from dreaming and dreamwork for scientific creativity.},
keywords = {Dream Re-experiencing, Dreamwork Engineering, Fundamental component, Generative AI, Immersive environment, Interactivity, Key elements, Personal Insight, Scientific Creativity, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Wang, Y.; Zhang, Y.
Enhancing Cognitive Recall in Dementia Patients: Integrating Generative AI with Virtual Reality for Behavioral and Memory Rehabilitation Proceedings Article
In: ACM Int. Conf. Proc. Ser., pp. 86–91, Association for Computing Machinery, 2024, ISBN: 979-840071806-9.
Abstract | Links | BibTeX | Tags: AI, Cognitive rehabilitation, Cognitive stimulations, Dementia patients, Electronic health record, Firebase, Generalisation, Neurodegenerative diseases, Non visuals, Patient rehabilitation, Rehabilitation projects, Virtual environments, Virtual Reality, Virtual-reality environment, Visual memory, Visual-spatial, VR
@inproceedings{wang_enhancing_2024,
title = {Enhancing Cognitive Recall in Dementia Patients: Integrating Generative AI with Virtual Reality for Behavioral and Memory Rehabilitation},
author = {Y. Wang and Y. Zhang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205444838&doi=10.1145%2f3686540.3686552&partnerID=40&md5=1577754660fddd936254fc78586e6a17},
doi = {10.1145/3686540.3686552},
isbn = {979-840071806-9},
year = {2024},
date = {2024-01-01},
booktitle = {ACM Int. Conf. Proc. Ser.},
pages = {86–91},
publisher = {Association for Computing Machinery},
abstract = {In this project, we developed a cognitive rehabilitation program for dementia patients, leveraging generative AI and virtual reality (VR) to evoke personal memories [4]. Integrating OpenAI, DreamStudio, and Unity, our system allows patients to input descriptions, generating visual memories in a VR environment [5]. In trials, 85% of AI-generated images matched patients' expectations, although some inaccuracies arose from AI generalizations. Further validation with dementia patients is needed to assess memory recovery impacts. This novel approach modernizes Cognitive Stimulation Therapy (CST), traditionally reliant on non-visual exercises, by incorporating AI and VR to enhance memory recall and visual-spatial skills. While the world is developing more and more into Artificial Intelligence (AI) and Virtual Reality (VR), our program successfully coordinates them to help stimulate dementia patients' brains and perform the memory recall and visual-spatial aspects of CST. © 2024 Copyright held by the owner/author(s).},
keywords = {AI, Cognitive rehabilitation, Cognitive stimulations, Dementia patients, Electronic health record, Firebase, Generalisation, Neurodegenerative diseases, Non visuals, Patient rehabilitation, Rehabilitation projects, Virtual environments, Virtual Reality, Virtual-reality environment, Visual memory, Visual-spatial, VR},
pubstate = {published},
tppubtype = {inproceedings}
}
Schmidt, P.; Arlt, S.; Ruiz-Gonzalez, C.; Gu, X.; Rodriguez, C.; Krenn, M.
Virtual reality for understanding artificial-intelligence-driven scientific discovery with an application in quantum optics Journal Article
In: Machine Learning: Science and Technology, vol. 5, no. 3, 2024, ISSN: 2632-2153, (Publisher: Institute of Physics).
Abstract | Links | BibTeX | Tags: 3-dimensional, Analysis process, Digital discovery, Generative adversarial networks, Generative models, Human capability, Immersive virtual reality, Intelligence models, Quantum entanglement, Quantum optics, Scientific discovery, Scientific understanding, Virtual Reality, Virtual-reality environment
@article{schmidt_virtual_2024,
title = {Virtual reality for understanding artificial-intelligence-driven scientific discovery with an application in quantum optics},
author = {P. Schmidt and S. Arlt and C. Ruiz-Gonzalez and X. Gu and C. Rodriguez and M. Krenn},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85201265211&doi=10.1088%2F2632-2153%2Fad5fdb&partnerID=40&md5=9424d1af8f11fd1da3258f8a69c882a4},
doi = {10.1088/2632-2153/ad5fdb},
issn = {2632-2153},
year = {2024},
date = {2024-01-01},
journal = {Machine Learning: Science and Technology},
volume = {5},
number = {3},
abstract = {Generative Artificial Intelligence (AI) models can propose solutions to scientific problems beyond human capability. To truly make conceptual contributions, researchers need to be capable of understanding the AI-generated structures and extracting the underlying concepts and ideas. When algorithms provide little explanatory reasoning alongside the output, scientists have to reverse-engineer the fundamental insights behind proposals based solely on examples. This task can be challenging as the output is often highly complex and thus not immediately accessible to humans. In this work we show how transferring part of the analysis process into an immersive virtual reality (VR) environment can assist researchers in developing an understanding of AI-generated solutions. We demonstrate the usefulness of VR in finding interpretable configurations of abstract graphs, representing Quantum Optics experiments. Thereby, we can manually discover new generalizations of AI-discoveries as well as new understanding in experimental quantum optics. Furthermore, it allows us to customize the search space in an informed way—as a human-in-the-loop—to achieve significantly faster subsequent discovery iterations. As concrete examples, with this technology, we discover a new resource-efficient 3-dimensional entanglement swapping scheme, as well as a 3-dimensional 4-particle Greenberger-Horne-Zeilinger-state analyzer. Our results show the potential of VR to enhance a researcher’s ability to derive knowledge from graph-based generative AI. This type of AI is a widely used abstract data representation in various scientific fields.},
note = {Publisher: Institute of Physics},
keywords = {3-dimensional, Analysis process, Digital discovery, Generative adversarial networks, Generative models, Human capability, Immersive virtual reality, Intelligence models, Quantum entanglement, Quantum optics, Scientific discovery, Scientific understanding, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {article}
}
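In the graph representation this paper explores in VR (following Krenn and colleagues' graph-experiment correspondence), vertices stand for photons, colored edges for pair sources, and each perfect matching of the graph contributes one term to the produced quantum state. The toy Python sketch below enumerates the matchings of a small example graph; it is illustrative only, not the authors' toolchain.

# Toy sketch: read a quantum state off the perfect matchings of an
# experiment graph. A 4-photon example with two edge colors (modes).
from itertools import combinations

edges = [(0, 1, "red"), (2, 3, "red"), (0, 2, "blue"), (1, 3, "blue")]

def perfect_matchings(vertices: set, edges: list) -> list:
    """Enumerate edge subsets that cover every vertex exactly once."""
    n = len(vertices)
    result = []
    for subset in combinations(edges, n // 2):
        covered = [v for (a, b, _) in subset for v in (a, b)]
        if len(set(covered)) == n == len(covered):
            result.append(subset)
    return result

for m in perfect_matchings({0, 1, 2, 3}, edges):
    # Each matching is one term in the superposition; every photon's mode
    # is set by the color of the edge covering it. Here the two matchings
    # (all-red, all-blue) yield a GHZ-like state.
    print(m)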
2023
Yamazaki, T.; Mizumoto, T.; Yoshikawa, K.; Ohagi, M.; Kawamoto, T.; Sato, T.
An Open-Domain Avatar Chatbot by Exploiting a Large Language Model Proceedings Article
In: Stoyanchev, S.; Joty, S.; Schlangen, D.; Dusek, O.; Kennington, C.; Alikhani, M. (Eds.): pp. 428–432, Association for Computational Linguistics (ACL), 2023, ISBN: 9798891760288.
Abstract | Links | BibTeX | Tags: Chatbots, Computational Linguistics, Dialogue systems, Human levels, Interactive computer graphics, Language Model, Multimodal dialogue systems, Multimodal integration, Natural language processing systems, Research communities, Speech processing, Virtual Reality, Virtual-reality environment
@inproceedings{yamazaki_open-domain_2023,
title = {An Open-Domain Avatar Chatbot by Exploiting a Large Language Model},
author = {T. Yamazaki and T. Mizumoto and K. Yoshikawa and M. Ohagi and T. Kawamoto and T. Sato},
editor = {S. Stoyanchev and S. Joty and D. Schlangen and O. Dusek and C. Kennington and M. Alikhani},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105017641158&partnerID=40&md5=5a401bfaeb301f99b444debdd792272c},
isbn = {9798891760288},
year = {2023},
date = {2023-01-01},
pages = {428–432},
publisher = {Association for Computational Linguistics (ACL)},
abstract = {With the ambition to create avatars capable of human-level casual conversation, we developed an open-domain avatar chatbot, situated in a virtual reality environment, that employs a large language model (LLM). Introducing the LLM posed several challenges for multimodal integration, such as developing techniques to align diverse outputs and avatar control, as well as addressing the issue of slow generation speed. To address these challenges, we integrated various external modules into our system. Our system is based on the award-winning model from the Dialogue System Live Competition 5. Through this work, we hope to stimulate discussions within the research community about the potential and challenges of multimodal dialogue systems enhanced with LLMs.},
keywords = {Chatbots, Computational Linguistics, Dialogue systems, Human levels, Interactive computer graphics, Language Model, Multimodal dialogue systems, Multimodal integration, Natural language processing systems, Research communities, Speech processing, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
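One common pattern for the output-alignment challenge this abstract mentions (an assumption here, not necessarily the paper's technique) is to have the LLM emit inline tags that are stripped from the spoken text and routed to the avatar's animation layer. A minimal Python sketch with hypothetical tag names:

# Minimal sketch: split an LLM reply into TTS text and avatar gestures.
import re

TAG = re.compile(r"\[(wave|nod|point)\]")

def split_speech_and_gestures(llm_output: str):
    gestures = TAG.findall(llm_output)        # e.g. ["wave"]
    speech = TAG.sub("", llm_output).strip()  # remaining text goes to TTS
    return speech, gestures

speech, gestures = split_speech_and_gestures("[wave] Hi! Great to see you again.")
# speech -> "Hi! Great to see you again.", gestures -> ["wave"]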
Ayre, D.; Dougherty, C.; Zhao, Y.
Implementation of an Artificial Intelligence (AI) Instructional Support System in a Virtual Reality (VR) Thermal-Fluids Laboratory Proceedings Article
In: ASME Int Mech Eng Congress Expos Proc, American Society of Mechanical Engineers (ASME), 2023, ISBN: 978-079188765-3.
Abstract | Links | BibTeX | Tags: Artificial intelligence, E-Learning, Education computing, Engineering education, Fluid mechanics, Generative AI, generative artificial intelligence, GPT, Higher education, Instructional support, Laboratories, Laboratory class, Laboratory experiments, Physical laboratory, Professional aspects, Students, Support systems, Thermal fluids, Virtual Reality, Virtual-reality environment
@inproceedings{ayre_implementation_2023,
title = {Implementation of an Artificial Intelligence (AI) Instructional Support System in a Virtual Reality (VR) Thermal-Fluids Laboratory},
author = {D. Ayre and C. Dougherty and Y. Zhao},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185393784&doi=10.1115%2fIMECE2023-112683&partnerID=40&md5=c2492592a016478a4b3591ff82a93be5},
doi = {10.1115/IMECE2023-112683},
isbn = {978-079188765-3},
year = {2023},
date = {2023-01-01},
booktitle = {ASME Int Mech Eng Congress Expos Proc},
volume = {8},
publisher = {American Society of Mechanical Engineers (ASME)},
abstract = {Physical laboratory experiments have long been the cornerstone of higher education, providing future engineers practical real-life experience invaluable to their careers. However, demand for laboratory time has exceeded physical capabilities. Virtual reality (VR) labs have proven to retain many benefits of attending physical labs while also providing significant advantages only available in a VR environment. Previously, our group had developed a pilot VR lab that replicated six (6) unique thermal-fluids lab experiments developed using the Unity game engine. One of the VR labs was tested in a thermal-fluid mechanics laboratory class with favorable results, but students highlighted the need for additional assistance within the VR simulation. In response to this testing, we have incorporated an artificial intelligence (AI) assistant to aid students within the VR environment by developing an interaction model. Utilizing the Generative Pre-trained Transformer 4 (GPT-4) large language model (LLM) and augmented context retrieval, the AI assistant can provide reliable instruction and troubleshoot errors while students conduct the lab procedure to provide an experience similar to a real-life lab assistant. The updated VR lab was tested in two laboratory classes and while the overall tone of student response to an AI-powered assistant was excitement and enthusiasm, observations and other recorded data show that students are currently unsure of how to utilize this new technology, which will help guide future refinement of AI components within the VR environment. © 2023 by ASME.},
keywords = {Artificial intelligence, E-Learning, Education computing, Engineering education, Fluid mechanics, Generative AI, generative artificial intelligence, GPT, Higher education, Instructional support, Laboratories, Laboratory class, Laboratory experiments, Physical laboratory, Professional aspects, Students, Support systems, Thermal fluids, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
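As a rough sketch of the interaction-model idea above (state-aware instructional support), the assistant prompt can carry the current procedure step and any active error flags so the LLM can troubleshoot in context. Everything below is hypothetical and much simplified relative to the paper's GPT-4 and augmented-context-retrieval setup.

# Minimal sketch: inject lab state into an assistant prompt. Names and
# manual text are invented for illustration.
LAB_MANUAL = {
    1: "Open valve V1 and confirm flow on the rotameter.",
    2: "Record inlet and outlet temperatures every 30 seconds.",
}

def build_assistant_prompt(step: int, error_flags: list, question: str) -> str:
    context = [
        f"Current procedure step {step}: {LAB_MANUAL[step]}",
        "Active errors: " + (", ".join(error_flags) if error_flags else "none"),
    ]
    return ("You are a lab assistant in a VR thermal-fluids experiment.\n"
            + "\n".join(context)
            + f"\nStudent asks: {question}\nGive one concrete next action.")

print(build_assistant_prompt(1, ["no flow detected"], "Why isn't the water moving?"))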