AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2025
Li, Z.; Zhang, H.; Peng, C.; Peiris, R.
Exploring Large Language Model-Driven Agents for Environment-Aware Spatial Interactions and Conversations in Virtual Reality Role-Play Scenarios Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces, VR, pp. 1–11, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 979-833153645-9.
@inproceedings{li_exploring_2025,
title = {Exploring Large Language Model-Driven Agents for Environment-Aware Spatial Interactions and Conversations in Virtual Reality Role-Play Scenarios},
author = {Z. Li and H. Zhang and C. Peng and R. Peiris},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105002706893&doi=10.1109%2fVR59515.2025.00025&partnerID=40&md5=60f22109e054c9035a0c2210bb797039},
doi = {10.1109/VR59515.2025.00025},
isbn = {979-833153645-9},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces, VR},
pages = {1–11},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Recent research has begun adopting Large Language Model (LLM) agents to enhance Virtual Reality (VR) interactions, creating immersive chatbot experiences. However, while current studies focus on generating dialogue from user speech inputs, their abilities to generate richer experiences based on the perception of LLM agents' VR environments and interaction cues remain unexplored. Hence, in this work, we propose an approach that enables LLM agents to perceive virtual environments and generate environment-aware interactions and conversations for an embodied human-AI interaction experience in VR environments. Here, we define a schema for describing VR environments and their interactions through text prompts. We evaluate the performance of our method through five role-play scenarios created using our approach in a study with 14 participants. The findings discuss the opportunities and challenges of our proposed approach for developing environment-aware LLM agents that facilitate spatial interactions and conversations within VR role-play scenarios. © 2025 IEEE.},
keywords = {Chatbots, Computer simulation languages, Context-awareness, Digital elevation model, Generative AI, Human-AI Interaction, Language Model, Large language model, large language models, Model agents, Role-play simulation, role-play simulations, Role-plays, Spatial interaction, Virtual environments, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
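For readers who want to reuse records like the one above programmatically, a minimal stdlib-only Python sketch can pull the fields out of a single flat entry. This is a simplification that assumes brace-free field values, as in these Scopus exports; a real tool such as `bibtexparser` handles nesting and escaping.

```python
import re

def parse_bibtex_entry(entry: str) -> dict:
    """Parse one flat BibTeX entry (field values without nested braces)
    into a dict of field names -> values, plus entry type and cite key."""
    head = re.match(r"@(\w+)\{([^,\s]+),", entry.strip())
    record = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))
    record["entrytype"] = head.group(1).lower()
    record["citekey"] = head.group(2)
    return record
```

Applied to an entry on this page, `record["title"]`, `record["doi"]`, and `record["keywords"]` then come back as plain strings.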
Lau, K. H. C.; Bozkir, E.; Gao, H.; Kasneci, E.
Evaluating Usability and Engagement of Large Language Models in Virtual Reality for Traditional Scottish Curling Proceedings Article
In: Del Bue, A.; Canton, C.; Pont-Tuset, J.; Tommasi, T. (Ed.): Lect. Notes Comput. Sci., pp. 177–195, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 0302-9743; ISBN: 978-303191571-0.
@inproceedings{lau_evaluating_2025,
title = {Evaluating Usability and Engagement of Large Language Models in Virtual Reality for Traditional Scottish Curling},
author = {K. H. C. Lau and E. Bozkir and H. Gao and E. Kasneci},
editor = {Del Bue, A. and Canton, C. and Pont-Tuset, J. and Tommasi, T.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105006905979&doi=10.1007%2f978-3-031-91572-7_11&partnerID=40&md5=8a81fb09ff54e57b9429660a8898149a},
doi = {10.1007/978-3-031-91572-7_11},
issn = {0302-9743},
isbn = {978-303191571-0},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15628 LNCS},
pages = {177–195},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {This paper explores the innovative application of Large Language Models (LLMs) in Virtual Reality (VR) environments to promote heritage education, focusing on traditional Scottish curling presented in the game “Scottish Bonspiel VR”. Our study compares the effectiveness of LLM-based chatbots with pre-defined scripted chatbots, evaluating key criteria such as usability, user engagement, and learning outcomes. The results show that LLM-based chatbots significantly improve interactivity and engagement, creating a more dynamic and immersive learning environment. This integration helps document and preserve cultural heritage and enhances dissemination processes, which are crucial for safeguarding intangible cultural heritage (ICH) amid environmental changes. Furthermore, the study highlights the potential of novel technologies in education to provide immersive experiences that foster a deeper appreciation of cultural heritage. These findings support the wider application of LLMs and VR in cultural education to address global challenges and promote sustainable practices to preserve and enhance cultural heritage. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Chatbots, Cultural heritages, Digital Cultural Heritage, Digital cultural heritages, Educational robots, Engineering education, Heritage education, Historic Preservation, Language Model, Large language model, large language models, Learning outcome, Model-based OPC, Usability engineering, User Engagement, Virtual Reality, Virtual-reality environment, Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
Gao, H.; Xie, Y.; Kasneci, E.
PerVRML: ChatGPT-Driven Personalized VR Environments for Machine Learning Education Journal Article
In: International Journal of Human-Computer Interaction, 2025, ISSN: 1044-7318.
@article{gao_pervrml_2025,
title = {PerVRML: ChatGPT-Driven Personalized VR Environments for Machine Learning Education},
author = {H. Gao and Y. Xie and E. Kasneci},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005776517&doi=10.1080%2f10447318.2025.2504188&partnerID=40&md5=c2c59be3d20d02c6df7750c2330c8f6d},
doi = {10.1080/10447318.2025.2504188},
issn = {1044-7318},
year = {2025},
date = {2025-01-01},
journal = {International Journal of Human-Computer Interaction},
abstract = {The advent of large language models (LLMs) such as ChatGPT has demonstrated significant potential for advancing educational technologies. Recently, growing interest has emerged in integrating ChatGPT with virtual reality (VR) to provide interactive and dynamic learning environments. This study explores the effectiveness of ChatGPT-driven VR in facilitating machine learning education through PerVRML. PerVRML incorporates a ChatGPT-powered avatar that provides real-time assistance and uses LLMs to personalize learning paths based on various sensor data from VR. A between-subjects design was employed to compare two learning modes: personalized and non-personalized. Quantitative data were collected from assessments, user experience surveys, and interaction metrics. The results indicate that while both learning modes supported learning effectively, ChatGPT-powered personalization significantly improved learning outcomes and had distinct impacts on user feedback. These findings underscore the potential of ChatGPT-enhanced VR to deliver adaptive and personalized educational experiences. © 2025 Taylor & Francis Group, LLC.},
keywords = {Backpropagation, ChatGPT, Curricula, Educational robots, Immersive learning, Interactive learning, Language Model, Large language model, large language models, Learning mode, Machine learning education, Machine-learning, Personalized learning, Support vector machines, Teaching, Virtual Reality, Virtual-reality environment, Virtualization},
pubstate = {published},
tppubtype = {article}
}
Kurai, R.; Hiraki, T.; Hiroi, Y.; Hirao, Y.; Perusquia-Hernandez, M.; Uchiyama, H.; Kiyokawa, K.
MagicItem: Dynamic Behavior Design of Virtual Objects With Large Language Models in a Commercial Metaverse Platform Journal Article
In: IEEE Access, vol. 13, pp. 19132–19143, 2025, ISSN: 2169-3536.
@article{kurai_magicitem_2025,
title = {MagicItem: Dynamic Behavior Design of Virtual Objects With Large Language Models in a Commercial Metaverse Platform},
author = {R. Kurai and T. Hiraki and Y. Hiroi and Y. Hirao and M. Perusquia-Hernandez and H. Uchiyama and K. Kiyokawa},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85216011970&doi=10.1109%2fACCESS.2025.3530439&partnerID=40&md5=7a33b9618af8b4ab79b43fb3bd4317cf},
doi = {10.1109/ACCESS.2025.3530439},
issn = {2169-3536},
year = {2025},
date = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {19132–19143},
abstract = {To create rich experiences in virtual reality (VR) environments, it is essential to define the behavior of virtual objects through programming. However, programming in 3D spaces requires a wide range of background knowledge and programming skills. Although Large Language Models (LLMs) have provided programming support, they are still primarily aimed at programmers. In metaverse platforms, where many users inhabit VR spaces, most users are unfamiliar with programming, making it difficult for them to modify the behavior of objects in the VR environment easily. Existing LLM-based script generation methods for VR spaces require multiple lengthy iterations to implement the desired behaviors and are difficult to integrate into the operation of metaverse platforms. To address this issue, we propose a tool that generates behaviors for objects in VR spaces from natural language within Cluster, a metaverse platform with a large user base. By integrating LLMs with the Cluster Script provided by this platform, we enable users with limited programming experience to define object behaviors within the platform freely. We have also integrated our tool into a commercial metaverse platform and conducted online experiments with 63 general users of the platform. The experiments show that even users with no programming background can successfully generate behaviors for objects in VR spaces, resulting in a highly satisfying system. Our research contributes to democratizing VR content creation by enabling non-programmers to design dynamic behaviors for virtual objects in metaverse platforms. © 2013 IEEE.},
keywords = {Behavior design, Code programming, Computer simulation languages, Dynamic behaviors, Language Model, Large-language model, Low-code programming, Metaverse platform, Metaverses, Virtual addresses, Virtual environments, Virtual objects, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {article}
}
2024
White, M.; Banerjee, N. K.; Banerjee, S.
VRcabulary: A VR Environment for Reinforced Language Learning via Multi-Modular Design Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 315–319, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-835037202-1.
@inproceedings{white_vrcabulary_2024,
title = {VRcabulary: A VR Environment for Reinforced Language Learning via Multi-Modular Design},
author = {M. White and N. K. Banerjee and S. Banerjee},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85187241160&doi=10.1109%2fAIxVR59861.2024.00053&partnerID=40&md5=4d8ff8ac5c6aa8336a571ba906fe0f5d},
doi = {10.1109/AIxVR59861.2024.00053},
isbn = {979-835037202-1},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {315–319},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {We demonstrate VRcabulary, a hierarchical modular virtual reality (VR) environment for language learning (LL). Current VR LL apps lack the benefit of reinforcement presented in typical classroom environments. Apps either introduce content in the second language and lack retention testing, or provide gamification without an in-environment instructional component. To acquire reinforcement of knowledge, the learner needs to visit the app multiple times, increasing the potential for monotony. In VRcabulary, we introduce a multi-modular hierarchical design with 3 modules - an instructional module providing AI-generated audio playbacks of object names, a practice module enabling interaction-based reinforcement of object names in response to audio playback, and an exam module enabling retention testing through interaction. To incentivize engagement by reducing monotony, we keep the designs of each module distinct. We provide sequential object presentations in the instructional module and multiple object assortments in the practice and exam modules. We provide feedback and multiple trials in the practice module, but eliminate them from the exam module. We expect cross-module diversity of interaction in VRcabulary to enhance engagement in VR LL. © 2024 IEEE.},
keywords = {E-Learning, Foreign language, Immersive, Instructional modules, Language learning, Modular designs, Modulars, Multi-modular, Reinforcement, Second language, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Chheang, V.; Sharmin, S.; Marquez-Hernandez, R.; Patel, M.; Rajasekaran, D.; Caulfield, G.; Kiafar, B.; Li, J.; Kullu, P.; Barmaki, R. L.
Towards Anatomy Education with Generative AI-based Virtual Assistants in Immersive Virtual Reality Environments Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 21–30, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-835037202-1.
@inproceedings{chheang_towards_2024,
title = {Towards Anatomy Education with Generative AI-based Virtual Assistants in Immersive Virtual Reality Environments},
author = {V. Chheang and S. Sharmin and R. Marquez-Hernandez and M. Patel and D. Rajasekaran and G. Caulfield and B. Kiafar and J. Li and P. Kullu and R. L. Barmaki},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85187216893&doi=10.1109%2fAIxVR59861.2024.00011&partnerID=40&md5=33e8744309add5fe400f4f341326505f},
doi = {10.1109/AIxVR59861.2024.00011},
isbn = {979-835037202-1},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {21–30},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Virtual reality (VR) and interactive 3D visualization systems have enhanced educational experiences and environments, particularly in complicated subjects such as anatomy education. VR-based systems surpass the potential limitations of traditional training approaches in facilitating interactive engagement among students. However, research on embodied virtual assistants that leverage generative artificial intelligence (AI) and verbal communication in the anatomy education context is underrepresented. In this work, we introduce a VR environment with a generative AI-embodied virtual assistant to support participants in responding to varying cognitive complexity anatomy questions and enable verbal communication. We assessed the technical efficacy and usability of the proposed environment in a pilot user study with 16 participants. We conducted a within-subject design for virtual assistant configuration (avatar- and screen-based), with two levels of cognitive complexity (knowledge- and analysis-based). The results reveal a significant difference in the scores obtained from knowledge- and analysis-based questions in relation to avatar configuration. Moreover, results provide insights into usability, cognitive task load, and the sense of presence in the proposed virtual assistant configurations. Our environment and results of the pilot study offer potential benefits and future research directions beyond medical education, using generative AI and embodied virtual agents as customized virtual conversational assistants. © 2024 IEEE.},
keywords = {3-D visualization systems, Anatomy education, Anatomy educations, Cognitive complexity, E-Learning, Embodied virtual assistant, Embodied virtual assistants, Generative AI, generative artificial intelligence, Human computer interaction, human-computer interaction, Immersive virtual reality, Interactive 3d visualizations, Knowledge Management, Medical education, Three dimensional computer graphics, Verbal communications, Virtual assistants, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Wang, Y.; Zhang, Y.
Enhancing Cognitive Recall in Dementia Patients: Integrating Generative AI with Virtual Reality for Behavioral and Memory Rehabilitation Proceedings Article
In: ACM Int. Conf. Proc. Ser., pp. 86–91, Association for Computing Machinery, 2024, ISBN: 979-840071806-9.
@inproceedings{wang_enhancing_2024,
title = {Enhancing Cognitive Recall in Dementia Patients: Integrating Generative AI with Virtual Reality for Behavioral and Memory Rehabilitation},
author = {Y. Wang and Y. Zhang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205444838&doi=10.1145%2f3686540.3686552&partnerID=40&md5=1577754660fddd936254fc78586e6a17},
doi = {10.1145/3686540.3686552},
isbn = {979-840071806-9},
year = {2024},
date = {2024-01-01},
booktitle = {ACM Int. Conf. Proc. Ser.},
pages = {86–91},
publisher = {Association for Computing Machinery},
abstract = {In this project, we developed a cognitive rehabilitation program for dementia patients, leveraging generative AI and virtual reality (VR) to evoke personal memories [4]. Integrating OpenAI, DreamStudio, and Unity, our system allows patients to input descriptions, generating visual memories in a VR environment [5]. In trials, 85% of AI-generated images matched patients' expectations, although some inaccuracies arose from AI generalizations. Further validation with dementia patients is needed to assess memory recovery impacts. This novel approach modernizes Cognitive Stimulation Therapy (CST), traditionally reliant on non-visual exercises, by incorporating AI and VR to enhance memory recall and visual-spatial skills. While the world is developing more and more into Artificial Intelligence (AI) and Virtual Reality (VR), our program successfully coordinates them to help stimulate dementia patients' brains and perform the memory recall and visual-spatial aspects of CST. © 2024 Copyright held by the owner/author(s).},
keywords = {AI, Cognitive rehabilitation, Cognitive stimulations, Dementia patients, Electronic health record, Firebase, Generalisation, Neurodegenerative diseases, Non visuals, Patient rehabilitation, Rehabilitation projects, Virtual environments, Virtual Reality, Virtual-reality environment, Visual memory, Visual-spatial, VR},
pubstate = {published},
tppubtype = {inproceedings}
}
Liu, P.; Kitson, A.; Picard-Deland, C.; Carr, M.; Liu, S.; Lc, R.; Zhu-Tian, C.
Virtual Dream Reliving: Exploring Generative AI in Immersive Environment for Dream Re-experiencing Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2024, ISBN: 979-840070331-7.
@inproceedings{liu_virtual_2024,
title = {Virtual Dream Reliving: Exploring Generative AI in Immersive Environment for Dream Re-experiencing},
author = {P. Liu and A. Kitson and C. Picard-Deland and M. Carr and S. Liu and R. Lc and C. Zhu-Tian},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85194182384&doi=10.1145%2f3613905.3644054&partnerID=40&md5=c6de3def82e91544f194a0fd465a636e},
doi = {10.1145/3613905.3644054},
isbn = {979-840070331-7},
year = {2024},
date = {2024-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {Dreaming is a fundamental component of the human experience. Modern-day psychologists and neuroscientists use "dreamwork" to describe a variety of strategies that deepen and engage with dreams. Re-experiencing the dream as if reliving the memory, feelings, and bodily sensations from the dream is a key element shared by many dreamwork practices. In this paper, we propose the concept of "dreamwork engineering" by creating a system enabling dream re-experiencing in a virtual reality environment through generative AI. Through an autoethnographic study, the first author documented his own dreams and relived his dream experiences for two weeks. Based on our results, we propose a technology-aided dreamwork framework, where technology could potentially augment traditional dreamwork methods through spatiality and movement, interactivity and abstract anchor. We further highlight the collaborative role of technology in dreamwork and advocate that the scientific community could also benefit from dreaming and dreamwork for scientific creativity. © 2024 Association for Computing Machinery. All rights reserved.},
keywords = {Dream Re-experiencing, Dreamwork Engineering, Fundamental component, Generative AI, Immersive environment, Interactivity, Key elements, Personal Insight, Scientific Creativity, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Schmidt, P.; Arlt, S.; Ruiz-Gonzalez, C.; Gu, X.; Rodríguez, C.; Krenn, M.
Virtual reality for understanding artificial-intelligence-driven scientific discovery with an application in quantum optics Journal Article
In: Machine Learning: Science and Technology, vol. 5, no. 3, 2024, ISSN: 2632-2153.
@article{schmidt_virtual_2024,
title = {Virtual reality for understanding artificial-intelligence-driven scientific discovery with an application in quantum optics},
author = {P. Schmidt and S. Arlt and C. Ruiz-Gonzalez and X. Gu and C. Rodríguez and M. Krenn},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85201265211&doi=10.1088%2f2632-2153%2fad5fdb&partnerID=40&md5=3a6af280ba0ac81507ade10f5dd1efb3},
doi = {10.1088/2632-2153/ad5fdb},
issn = {2632-2153},
year = {2024},
date = {2024-01-01},
journal = {Machine Learning: Science and Technology},
volume = {5},
number = {3},
abstract = {Generative Artificial Intelligence (AI) models can propose solutions to scientific problems beyond human capability. To truly make conceptual contributions, researchers need to be capable of understanding the AI-generated structures and extracting the underlying concepts and ideas. When algorithms provide little explanatory reasoning alongside the output, scientists have to reverse-engineer the fundamental insights behind proposals based solely on examples. This task can be challenging as the output is often highly complex and thus not immediately accessible to humans. In this work we show how transferring part of the analysis process into an immersive virtual reality (VR) environment can assist researchers in developing an understanding of AI-generated solutions. We demonstrate the usefulness of VR in finding interpretable configurations of abstract graphs, representing Quantum Optics experiments. Thereby, we can manually discover new generalizations of AI-discoveries as well as new understanding in experimental quantum optics. Furthermore, it allows us to customize the search space in an informed way—as a human-in-the-loop—to achieve significantly faster subsequent discovery iterations. As concrete examples, with this technology, we discover a new resource-efficient 3-dimensional entanglement swapping scheme, as well as a 3-dimensional 4-particle Greenberger-Horne-Zeilinger-state analyzer. Our results show the potential of VR to enhance a researcher’s ability to derive knowledge from graph-based generative AI. This type of AI is a widely used abstract data representation in various scientific fields. © 2024 The Author(s). Published by IOP Publishing Ltd.},
keywords = {3-dimensional, Analysis process, Digital discovery, Generative adversarial networks, Generative model, generative models, Human capability, Immersive virtual reality, Intelligence models, Quantum entanglement, Quantum optics, Scientific discovery, Scientific understanding, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {article}
}
2023
Ayre, D.; Dougherty, C.; Zhao, Y.
IMPLEMENTATION OF AN ARTIFICIAL INTELLIGENCE (AI) INSTRUCTIONAL SUPPORT SYSTEM IN A VIRTUAL REALITY (VR) THERMAL-FLUIDS LABORATORY Proceedings Article
In: ASME Int Mech Eng Congress Expos Proc, American Society of Mechanical Engineers (ASME), 2023, ISBN: 978-079188765-3.
@inproceedings{ayre_implementation_2023,
title = {IMPLEMENTATION OF AN ARTIFICIAL INTELLIGENCE (AI) INSTRUCTIONAL SUPPORT SYSTEM IN A VIRTUAL REALITY (VR) THERMAL-FLUIDS LABORATORY},
author = {D. Ayre and C. Dougherty and Y. Zhao},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185393784&doi=10.1115%2fIMECE2023-112683&partnerID=40&md5=c2492592a016478a4b3591ff82a93be5},
doi = {10.1115/IMECE2023-112683},
isbn = {978-079188765-3},
year = {2023},
date = {2023-01-01},
booktitle = {ASME Int Mech Eng Congress Expos Proc},
volume = {8},
publisher = {American Society of Mechanical Engineers (ASME)},
abstract = {Physical laboratory experiments have long been the cornerstone of higher education, providing future engineers practical real-life experience invaluable to their careers. However, demand for laboratory time has exceeded physical capabilities. Virtual reality (VR) labs have proven to retain many benefits of attending physical labs while also providing significant advantages only available in a VR environment. Previously, our group had developed a pilot VR lab that replicated six (6) unique thermal-fluids lab experiments developed using the Unity game engine. One of the VR labs was tested in a thermal-fluid mechanics laboratory class with favorable results, but students highlighted the need for additional assistance within the VR simulation. In response to this testing, we have incorporated an artificial intelligence (AI) assistant to aid students within the VR environment by developing an interaction model. Utilizing the Generative Pre-trained Transformer 4 (GPT-4) large language model (LLM) and augmented context retrieval, the AI assistant can provide reliable instruction and troubleshoot errors while students conduct the lab procedure, offering an experience similar to a real-life lab assistant. The updated VR lab was tested in two laboratory classes and while the overall tone of student response to an AI-powered assistant was excitement and enthusiasm, observations and other recorded data show that students are currently unsure of how to utilize this new technology, which will help guide future refinement of AI components within the VR environment. © 2023 by ASME.},
keywords = {Artificial intelligence, E-Learning, Education computing, Engineering education, Fluid mechanics, Generative AI, generative artificial intelligence, GPT, High educations, Instructional support, Laboratories, Laboratory class, Laboratory experiments, Physical laboratory, Professional aspects, Students, Support systems, Thermal fluids, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
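Each record above carries a `keywords` field, so the list can be browsed by topic offline as well. A hypothetical helper, assuming records have already been parsed into dicts whose `keywords` value is the comma-separated string shown in the entries:

```python
def filter_by_keyword(records, keyword):
    """Return records whose comma-separated keywords field
    contains the given keyword (case-insensitive)."""
    kw = keyword.strip().lower()
    return [
        r for r in records
        if kw in (k.strip().lower() for k in r.get("keywords", "").split(","))
    ]
```

For example, filtering the 2025 entries for "Generative AI" would select the Li et al. VR role-play paper but not the IEEE Access article, whose keywords omit that term.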