AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
2025
Tortora, A.; Amaro, I.; Della Greca, A.; Barra, P.
Exploring the Role of Generative Artificial Intelligence in Virtual Reality: Opportunities and Future Perspectives Proceedings Article
In: Chen, J. Y. C.; Fragomeni, G. (Eds.): Lect. Notes Comput. Sci., pp. 125–142, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 0302-9743; ISBN: 978-3-031-93699-9.
@inproceedings{tortora_exploring_2025,
title = {Exploring the Role of Generative Artificial Intelligence in Virtual Reality: Opportunities and Future Perspectives},
author = {A. Tortora and I. Amaro and A. Della Greca and P. Barra},
editor = {Chen, J. Y. C. and Fragomeni, G.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007788684&doi=10.1007%2f978-3-031-93700-2_9&partnerID=40&md5=7b69183bbf8172f9595f939254fb6831},
doi = {10.1007/978-3-031-93700-2_9},
issn = {0302-9743},
isbn = {978-3-031-93699-9},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15788 LNCS},
pages = {125–142},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {In recent years, generative AI models, such as language and image models, have started to revolutionize virtual reality (VR) by offering new opportunities for immersive and personalized interaction. This paper explores the potential of these Intelligent Augmentation technologies in the context of VR, analyzing how the generation of text and images in real time can enhance the user experience through dynamic and personalized environments and content. The integration of generative AI in VR scenarios holds promise in multiple fields, including education, professional training, design, and healthcare. However, their implementation involves significant challenges, such as privacy management, data security, and ethical issues related to cognitive manipulation and representation of reality. Through an overview of current applications and future prospects, this paper highlights the crucial role of generative AI in enhancing VR, helping to outline a path for the ethical and sustainable development of these immersive technologies. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Ethical technology, Future perspectives, Generative AI, Image modeling, Immersive, immersive experience, Immersive Experiences, Information Management, Language Model, Personnel training, Professional training, Real- time, Sensitive data, Training design, Users' experiences, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
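The abstract above centers on real-time text generation driving personalized VR content. As a concrete illustration, here is a minimal Python sketch of that pattern, assuming the OpenAI chat API as the generator; the model name, prompt, and profile fields are illustrative assumptions, not details from the paper.

# Minimal sketch: asking an LLM for a personalized VR scene description
# at runtime. A VR engine would map the returned description onto assets.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_scene(user_profile: str, context: str) -> str:
    """Return a scene description a VR engine could map onto assets."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You generate concise VR scene descriptions as a "
                        "bullet list of objects, lighting, and audio cues."},
            {"role": "user",
             "content": f"User profile: {user_profile}\nContext: {context}"},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_scene("nursing student, beginner", "ward handover training"))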
Li, Y.; Pang, E. C. H.; Ng, C. S. Y.; Azim, M.; Leung, H.
Enhancing Linear Algebra Education with AI-Generated Content in the CityU Metaverse: A Comparative Study Proceedings Article
In: Hao, T.; Wu, J. G.; Luo, X.; Sun, Y.; Mu, Y.; Ge, S.; Xie, W. (Eds.): Lect. Notes Comput. Sci., pp. 3–16, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 0302-9743; ISBN: 978-981-96-4406-3.
@inproceedings{li_enhancing_2025,
title = {Enhancing Linear Algebra Education with AI-Generated Content in the CityU Metaverse: A Comparative Study},
author = {Y. Li and E. C. H. Pang and C. S. Y. Ng and M. Azim and H. Leung},
editor = {Hao, T. and Wu, J. G. and Luo, X. and Sun, Y. and Mu, Y. and Ge, S. and Xie, W.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003632691&doi=10.1007%2f978-981-96-4407-0_1&partnerID=40&md5=c067ba5d4c15e9c0353bf315680531fc},
doi = {10.1007/978-981-96-4407-0_1},
issn = {0302-9743},
isbn = {978-981-96-4406-3},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15589 LNCS},
pages = {3–16},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {In today’s digital age, the metaverse is emerging as the forthcoming evolution of the internet. It provides an immersive space that marks a new frontier in the way digital interactions are facilitated and experienced. In this paper, we present the CityU Metaverse, which aims to construct a digital twin of our university campus. It is designed as an educational virtual world where learning applications can be embedded in this virtual campus, supporting not only remote and collaborative learning but also professional technical training to enhance educational experiences through immersive and interactive learning. To evaluate the effectiveness of this educational metaverse, we conducted an experiment focused on 3D linear transformation in linear algebra, with teaching content generated by generative AI, comparing our metaverse system with traditional teaching methods. Knowledge tests and surveys assessing learning interest revealed that students engaged with the CityU Metaverse, facilitated by AI-generated content, outperformed those in traditional settings and reported greater enjoyment during the learning process. The work provides valuable perspectives on the behaviors and interactions within the metaverse by analyzing user preferences and learning outcomes. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.},
keywords = {Comparatives studies, Digital age, Digital interactions, digital twin, Educational metaverse, Engineering education, Generative AI, Immersive, Matrix algebra, Metaverse, Metaverses, Personnel training, Students, Teaching, University campus, Virtual environments, virtual learning environment, Virtual learning environments, Virtual Reality, Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
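The experiment above centers on 3D linear transformations, so a worked example helps fix ideas: the content the metaverse lesson visualizes reduces to applying a 3×3 matrix to object vertices. A minimal NumPy sketch; the matrix values are illustrative, not taken from the study.

# Worked example of the underlying math: applying a 3D linear
# transformation to the vertices of a unit cube.
import numpy as np

# Eight vertices of the unit cube, one per row.
vertices = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                    dtype=float)

# A linear map combining a rotation about z, a shear in x, and a scale in z.
theta = np.pi / 6
rotate_z = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0,              0,             1]])
shear_scale = np.array([[1.0, 0.5, 0.0],
                        [0.0, 1.0, 0.0],
                        [0.0, 0.0, 2.0]])
A = rotate_z @ shear_scale

# Row vectors transform via the transpose: v' = A v  <=>  V' = V A^T.
transformed = vertices @ A.T
print(np.round(transformed, 3))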
Zhu, X. T.; Cheerman, H.; Cheng, M.; Kiami, S. R.; Chukoskie, L.; McGivney, E.
Designing VR Simulation System for Clinical Communication Training with LLMs-Based Embodied Conversational Agents Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2025, ISBN: 979-8-4007-1395-8.
@inproceedings{zhu_designing_2025,
title = {Designing VR Simulation System for Clinical Communication Training with LLMs-Based Embodied Conversational Agents},
author = {X. T. Zhu and H. Cheerman and M. Cheng and S. R. Kiami and L. Chukoskie and E. McGivney},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005754066&doi=10.1145%2f3706599.3719693&partnerID=40&md5=4468fbd54b43d6779259300afd08632e},
doi = {10.1145/3706599.3719693},
isbn = {979-8-4007-1395-8},
year = {2025},
date = {2025-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {VR simulation in Health Professions (HP) education demonstrates huge potential, but fixed learning content with little customization limits its application beyond lab environments. To address these limitations in the context of VR for patient communication training, we conducted a user-centered study involving semi-structured interviews with advanced HP students to understand their challenges in clinical communication training and perceptions of VR-based solutions. From this, we derived design insights emphasizing the importance of realistic scenarios, simple interactions, and unpredictable dialogues. Building on these insights, we developed the Virtual AI Patient Simulator (VAPS), a novel VR system powered by Large Language Models (LLMs) and Embodied Conversational Agents (ECAs), supporting dynamic and customizable patient interactions for immersive learning. We also provided an example of how clinical professors could use user-friendly design forms to create personalized scenarios that align with course objectives in VAPS and discuss future implications of integrating AI-driven technologies into VR education. © 2025 Copyright held by the owner/author(s).},
keywords = {Clinical communications, Clinical Simulation, Communications training, Curricula, Embodied conversational agent, Embodied Conversational Agents, Health professions, Intelligent virtual agents, Language Model, Medical education, Model-based OPC, Patient simulators, Personnel training, Students, Teaching, User centered design, Virtual environments, Virtual Reality, VR simulation, VR simulation systems},
pubstate = {published},
tppubtype = {inproceedings}
}
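VAPS itself is not released with this paper, but the core mechanism the abstract describes, an instructor-facing design form that parameterizes an LLM persona behind an embodied agent, can be sketched compactly. In the Python sketch below, the form fields, system prompt, and OpenAI model are illustrative assumptions, not the authors' implementation.

# Sketch of an LLM-backed virtual patient in the spirit of VAPS: a plain
# dict stands in for the instructor's "design form".
from openai import OpenAI

client = OpenAI()

design_form = {
    "name": "Alex Rivera",                       # placeholder persona
    "age": 58,
    "chief_complaint": "intermittent chest tightness",
    "affect": "anxious, gives short answers unless reassured",
    "learning_objective": "practice open-ended history taking",
}

SYSTEM_PROMPT = (
    "You are role-playing a standardized patient in a clinical "
    "communication exercise. Stay in character and never volunteer "
    "information the student has not elicited.\n"
    + "\n".join(f"{k}: {v}" for k, v in design_form.items())
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def patient_reply(student_utterance: str) -> str:
    """One dialogue turn; the full history keeps the encounter coherent."""
    history.append({"role": "user", "content": student_utterance})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(patient_reply("Hi, I'm a nursing student. What brings you in today?"))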
Volkova, S.; Nguyen, D.; Penafiel, L.; Kao, H. -T.; Cohen, M.; Engberson, G.; Cassani, L.; Almutairi, M.; Chiang, C.; Banerjee, N.; Belcher, M.; Ford, T. W.; Yankoski, M. G.; Weninger, T.; Gomez-Zara, D.; Rebensky, S.
VirTLab: Augmented Intelligence for Modeling and Evaluating Human-AI Teaming Through Agent Interactions Proceedings Article
In: Sottilare, R. A.; Schwarz, J. (Eds.): Lect. Notes Comput. Sci., pp. 279–301, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 0302-9743; ISBN: 978-3-031-92969-4.
@inproceedings{volkova_virtlab_2025,
title = {VirTLab: Augmented Intelligence for Modeling and Evaluating Human-AI Teaming Through Agent Interactions},
author = {S. Volkova and D. Nguyen and L. Penafiel and H. -T. Kao and M. Cohen and G. Engberson and L. Cassani and M. Almutairi and C. Chiang and N. Banerjee and M. Belcher and T. W. Ford and M. G. Yankoski and T. Weninger and D. Gomez-Zara and S. Rebensky},
editor = {Sottilare, R. A. and Schwarz, J.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007830752&doi=10.1007%2f978-3-031-92970-0_20&partnerID=40&md5=c578dc95176a617f6de2a1c6f998f73f},
doi = {10.1007/978-3-031-92970-0_20},
issn = {0302-9743},
isbn = {978-3-031-92969-4},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15813 LNCS},
pages = {279–301},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {This paper introduces VirTLab (Virtual Teaming Laboratory), a novel augmented intelligence platform designed to simulate and analyze interactions between human-AI teams (HATs) through the use of human digital twins (HDTs) and AI agents. VirTLab enhances operational readiness by systematically analyzing HAT dynamics, fostering trust development, and providing actionable recommendations to improve team performance outcomes. VirTLab combines agents driven by large language models (LLM) interacting in a simulated environment with integrated HAT performance measures obtained using interactive visual analytics. VirTLab integrates four key components: (1) HDTs with configurable profiles, (2) operational AI teammates, (3) a simulation engine that enforces temporal and spatial environment constraints, ensures situational awareness, and coordinates events between HDT and AI agents to deliver high-fidelity simulations, and (4) an evaluation platform that validates simulations against ground truth and enables exploration of how HDTs and AI attributes influence HAT functioning. We demonstrate VirTLab’s capabilities through focused experiments examining how variations in HDT openness, agreeableness, propensity to trust, and AI reliability and transparency influence HAT performance. Our HAT performance evaluation framework incorporates both objective measures such as communication patterns and mission completion, and subjective measures including perceived trust and team coordination. Results on search and rescue missions reveal that AI teammate reliability significantly impacts communication dynamics and team assistance behaviors, whereas HDT personality traits influence trust development and team coordination, insights that directly inform the design of HAT training programs. VirTLab enables instructional designers to explore interventions in HAT behaviors through controlled experiments and causal analysis, leading to improved HAT performance. Visual analytics support the examination of HAT functioning across different conditions, allowing for real-time assessment and adaptation of scenarios. VirTLab contributes to operational readiness by preparing human operators to work seamlessly with AI counterparts in real-world situations. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Agent based simulation, agent-based simulation, Augmented Reality, Causal analysis, HAT processes and states, Human digital twin, human digital twins, Human-AI team process and state, Human-AI teaming, Intelligent virtual agents, Operational readiness, Personnel training, Team performance, Team process, Virtual teaming, Visual analytics},
pubstate = {published},
tppubtype = {inproceedings}
}
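VirTLab's actual trust and personality dynamics are not published with this entry, but the experimental setup it describes, configurable HDT trait profiles interacting with an AI teammate of given reliability, can be caricatured in a few lines. The update rule below is a toy assumption for illustration, not the authors' model.

# Toy agent-based sketch of configurable HDT profiles and trust dynamics.
import random
from dataclasses import dataclass

@dataclass
class HDTProfile:
    openness: float             # 0..1 personality trait
    agreeableness: float        # 0..1
    propensity_to_trust: float  # 0..1 initial disposition toward the AI

def step_trust(trust: float, ai_reliable: bool, hdt: HDTProfile,
               base_rate: float = 0.10) -> float:
    """Move trust toward 1 after a reliable AI action, toward 0 otherwise,
    at a trait-dependent learning rate (illustrative dynamics)."""
    rate = base_rate * (0.5 + 0.5 * hdt.propensity_to_trust)
    target = 1.0 if ai_reliable else 0.0
    return trust + rate * (target - trust)

random.seed(0)
hdt = HDTProfile(openness=0.7, agreeableness=0.6, propensity_to_trust=0.8)
ai_reliability = 0.75  # probability the AI teammate acts reliably
trust = hdt.propensity_to_trust
for _ in range(20):
    trust = step_trust(trust, random.random() < ai_reliability, hdt)
print(f"trust after 20 interactions: {trust:.2f}")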
Sabir, A.; Hussain, R.; Pedro, A.; Park, C.
Personalized construction safety training system using conversational AI in virtual reality Journal Article
In: Automation in Construction, vol. 175, 2025, ISSN: 0926-5805.
@article{sabir_personalized_2025,
title = {Personalized construction safety training system using conversational AI in virtual reality},
author = {A. Sabir and R. Hussain and A. Pedro and C. Park},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105002741042&doi=10.1016%2fj.autcon.2025.106207&partnerID=40&md5=376284339bf10fd5d799cc56c6643d36},
doi = {10.1016/j.autcon.2025.106207},
issn = {0926-5805},
year = {2025},
date = {2025-01-01},
journal = {Automation in Construction},
volume = {175},
abstract = {Training workers in safety protocols is crucial for mitigating job site hazards, yet traditional methods often fall short. This paper explores integrating virtual reality (VR) and large language models (LLMs) into iSafeTrainer, an AI-powered safety training system. The system allows trainees to engage with trade-specific content tailored to their expertise level in a third-person perspective in a non-immersive desktop virtual environment, eliminating the need for head-mounted displays. An experimental study evaluated the system through qualitative, survey-based assessments, focusing on user satisfaction, experience, engagement, guidance, and confidence. Results showed high satisfaction rates (>85 %) among novice users, with improved safety knowledge. Expert users suggested advanced scenarios, highlighting the system's potential for expansion. The modular architecture supports customization across various construction settings, ensuring adaptability for future improvements. © 2024},
keywords = {Construction safety, Construction safety training, Conversational AI, Digital elevation model, Helmet mounted displays, Language Model, Large language model, large language models, Personalized safety training, Personnel training, Safety training, Training Systems, Virtual environments, Virtual Reality, Workers'},
pubstate = {published},
tppubtype = {article}
}
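The key design idea in this entry is tailoring the conversational trainer to a worker's trade and expertise level. Below is a minimal sketch of how such tailoring could be encoded in a system prompt; the rubric text and field names are assumptions for illustration, not iSafeTrainer's actual prompts.

# Sketch: building a trade- and expertise-specific system prompt for a
# conversational safety trainer.
def build_trainer_prompt(trade: str, expertise: str, scenario: str) -> str:
    depth = {
        "novice": "Explain every hazard from first principles and quiz often.",
        "intermediate": "Focus on procedure-specific hazards; skip basics.",
        "expert": "Discuss edge cases, regulations, and incident case studies.",
    }[expertise]
    return (
        f"You are a construction safety trainer for a {trade} worker.\n"
        f"Scenario: {scenario}\n"
        f"Adaptation rule: {depth}\n"
        "Always end a turn with one question that checks understanding."
    )

print(build_trainer_prompt("scaffolder", "novice",
                           "working at height near power lines"))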
Buldu, K. B.; Özdel, S.; Lau, K. H. Carrie; Wang, M.; Saad, D.; Schönborn, S.; Boch, A.; Kasneci, E.; Bozkir, E.
CUIfy the XR: An Open-Source Package to Embed LLM-Powered Conversational Agents in XR Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 192–197, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 979-8-3315-2157-8.
@inproceedings{buldu_cuify_2025,
title = {CUIfy the XR: An Open-Source Package to Embed LLM-Powered Conversational Agents in XR},
author = {K. B. Buldu and S. Özdel and K. H. Carrie Lau and M. Wang and D. Saad and S. Schönborn and A. Boch and E. Kasneci and E. Bozkir},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000229165&doi=10.1109%2fAIxVR63409.2025.00037&partnerID=40&md5=837b0e3425d2e5a9358bbe6c8ecb5754},
doi = {10.1109/AIxVR63409.2025.00037},
isbn = {979-8-3315-2157-8},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {192–197},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Recent developments in computer graphics, machine learning, and sensor technologies enable numerous opportunities for extended reality (XR) setups for everyday life, from skills training to entertainment. With large corporations offering affordable consumer-grade head-mounted displays (HMDs), XR will likely become pervasive, and HMDs will develop as personal devices like smartphones and tablets. However, having intelligent spaces and naturalistic interactions in XR is as important as technological advances so that users grow their engagement in virtual and augmented spaces. To this end, large language model (LLM)-powered non-player characters (NPCs) with speech-to-text (STT) and text-to-speech (TTS) models bring significant advantages over conventional or pre-scripted NPCs for facilitating more natural conversational user interfaces (CUIs) in XR. This paper provides the community with an open-source, customizable, extendable, and privacy-aware Unity package, CUIfy, that facilitates speech-based NPC-user interaction with widely used LLMs, STT, and TTS models. Our package also supports multiple LLM-powered NPCs per environment and minimizes latency between different computational models through streaming to achieve usable interactions between users and NPCs. We publish our source code in the following repository: https://gitlab.lrz.de/hctl/cuify © 2025 IEEE.},
keywords = {Augmented Reality, Computational Linguistics, Conversational user interface, conversational user interfaces, Extended reality, Head-mounted-displays, Helmet mounted displays, Language Model, Large language model, large language models, Non-player character, non-player characters, Open source software, Personnel training, Problem oriented languages, Speech models, Speech-based interaction, Text to speech, Unity, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
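CUIfy itself is a Unity (C#) package, available at the repository linked in the entry, but its stated latency strategy, streaming the LLM output and dispatching speech synthesis per sentence rather than per reply, is easy to illustrate. A Python sketch assuming the OpenAI streaming API; synthesize() is a hypothetical stand-in for the TTS call.

# Sketch of the streaming pattern: flush complete sentences to TTS as
# tokens arrive instead of waiting for the full LLM reply.
from openai import OpenAI

client = OpenAI()

def synthesize(sentence: str) -> None:
    print(f"[TTS] {sentence}")  # placeholder for a real TTS dispatch

def speak_npc_reply(user_utterance: str) -> None:
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": "You are a museum-guide NPC."},
                  {"role": "user", "content": user_utterance}],
        stream=True,
    )
    buffer = ""
    for chunk in stream:
        if not chunk.choices:
            continue
        buffer += chunk.choices[0].delta.content or ""
        cut = max(buffer.rfind(p) for p in ".!?") + 1  # end of last sentence
        if cut > 0:
            synthesize(buffer[:cut].strip())
            buffer = buffer[cut:]
    if buffer.strip():
        synthesize(buffer.strip())  # flush any trailing fragment

speak_npc_reply("Tell me about this exhibit.")

Flushing at sentence boundaries trades a little prosody for a much lower time-to-first-audio, which is the usability point the abstract makes about streaming.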
Nygren, T.; Samuelsson, M.; Hansson, P. -O.; Efimova, E.; Bachelder, S.
AI Versus Human Feedback in Mixed Reality Simulations: Comparing LLM and Expert Mentoring in Preservice Teacher Education on Controversial Issues Journal Article
In: International Journal of Artificial Intelligence in Education, 2025, ISSN: 1560-4292.
@article{nygren_ai_2025,
title = {AI Versus Human Feedback in Mixed Reality Simulations: Comparing LLM and Expert Mentoring in Preservice Teacher Education on Controversial Issues},
author = {T. Nygren and M. Samuelsson and P. -O. Hansson and E. Efimova and S. Bachelder},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007244772&doi=10.1007%2fs40593-025-00484-8&partnerID=40&md5=d3cb14a8117045505cbbeb174b32b88d},
doi = {10.1007/s40593-025-00484-8},
issn = {1560-4292},
year = {2025},
date = {2025-01-01},
journal = {International Journal of Artificial Intelligence in Education},
abstract = {This study explores the potential role of AI-generated mentoring within simulated environments designed for teacher education, specifically focused on the challenges of teaching controversial issues. Using a mixed-methods approach, we empirically investigate the potential and challenges of AI-generated feedback compared to that provided by human experts when mentoring preservice teachers in the context of mixed reality simulations. Findings reveal that human experts offered more mixed and nuanced feedback than ChatGPT-4o and Perplexity, especially when identifying missed teaching opportunities and balancing classroom discussions. The AI models evaluated were publicly available pro versions of LLMs and were tested using detailed prompts and coding schemes aligned with educational theories. AI systems were not very good at identifying aspects of general, pedagogical or content knowledge based on Shulman’s theories but were still quite effective in generating feedback in line with human experts. The study highlights the promise of AI to enhance teacher training but underscores the importance of combining AI feedback with expert insights to address the complexities of real-world teaching. This research contributes to a growing understanding of AI's potential role and limitations in education. It suggests that, while AI can be valuable to scale mixed reality simulations, it should be carefully evaluated and balanced by human expertise in teacher education. © The Author(s) 2025.},
keywords = {AI-generated feedback, Controversial issue in social study education, Controversial issues in social studies education, Curricula, Domain knowledge, Economic and social effects, Expert systems, Generative AI, Human engineering, Knowledge engineering, Language Model, Large language model, large language models (LLMs), Mixed reality, Mixed reality simulation, Mixed reality simulation (MRS), Pedagogical content knowledge, Pedagogical content knowledge (PCK), Personnel training, Preservice teachers, Social studies education, Teacher training, Teacher training simulation, Teacher training simulations, Teaching, Training simulation},
pubstate = {published},
tppubtype = {article}
}
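The study's comparison presupposes that AI-generated and expert feedback are coded with a shared scheme and then checked for agreement. A minimal sketch of that analysis step using Cohen's kappa from scikit-learn; the category labels and data are hypothetical stand-ins for the paper's coding scheme.

# Sketch: quantifying agreement between expert and LLM feedback codes.
from sklearn.metrics import cohen_kappa_score

# One code per mentoring comment, assigned independently by a human
# expert and by an LLM (toy data for illustration).
expert_codes = ["praise", "missed_opportunity", "balance", "content", "praise"]
llm_codes    = ["praise", "praise",             "balance", "content", "praise"]

kappa = cohen_kappa_score(expert_codes, llm_codes)
print(f"Cohen's kappa (expert vs. LLM): {kappa:.2f}")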
2024
Harinee, S.; Raja, R. Vimal; Mugila, E.; Govindharaj, I.; Sanjaykumar, V.; Ragavendhiran, T.
Elevating Medical Training: A Synergistic Fusion of AI and VR for Immersive Anatomy Learning and Practical Procedure Mastery Proceedings Article
In: Int. Conf. Syst., Comput., Autom. Netw., ICSCAN, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3315-1002-2.
@inproceedings{harinee_elevating_2024,
title = {Elevating Medical Training: A Synergistic Fusion of AI and VR for Immersive Anatomy Learning and Practical Procedure Mastery},
author = {S. Harinee and R. Vimal Raja and E. Mugila and I. Govindharaj and V. Sanjaykumar and T. Ragavendhiran},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000334626&doi=10.1109%2fICSCAN62807.2024.10894451&partnerID=40&md5=100899b489c00335e0a652f2efd33e23},
doi = {10.1109/ICSCAN62807.2024.10894451},
isbn = {979-8-3315-1002-2},
year = {2024},
date = {2024-01-01},
booktitle = {Int. Conf. Syst., Comput., Autom. Netw., ICSCAN},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Virtual reality, with its 3D visualization, has brought an overwhelming change to the face of medical education, especially for courses like human anatomy. The proposed virtual reality system aims to bring massive improvements to the education received by medical students during their degree courses. The project puts forward a text-to-speech and speech-to-text aligned system that simplifies the use of a chatbot empowered by OpenAI GPT-4 and allows pupils to speak vocally with Avatar, the designated virtual assistant. Contrary to current methodologies, the virtual reality setup is powered by avatars and thus provides an enhanced virtual assistant environment. Avatars offer students repeated practice of medical procedures, the real uniqueness of the proposed product. The developed virtual reality environment improves on other current training techniques by letting a student interact with and become immersed in three-dimensional human organs, gaining knowledge of the subjects in greater depth. A virtual assistant guides the whole process, giving insights and support to help the student bridge the gap from theory to practice. The system essentially follows a knowledge-based and analysis-based approach. The combination of generative AI with embodied virtual agents has great potential as a customized virtual conversation assistant for a much wider range of applications. The study brings out the value of acquiring hands-on skills through simulated medical procedures and opens new frontiers of research and development in AI, VR, and medical education. In addition to assessing the effectiveness of these novel functionalities, the study also explores user-experience dimensions such as usability, task load, and the sense of presence in the proposed virtual medical environment. © 2024 IEEE.},
keywords = {'current, Anatomy education, Anatomy educations, Computer interaction, Curricula, Embodied virtual assistant, Embodied virtual assistants, Generative AI, Human- Computer Interaction, Immersive, Intelligent virtual agents, Medical computing, Medical education, Medical procedure practice, Medical procedures, Medical training, Personnel training, Students, Teaching, Three dimensional computer graphics, Usability engineering, Virtual assistants, Virtual environments, Virtual Reality, Visualization},
pubstate = {published},
tppubtype = {inproceedings}
}
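The pipeline this entry describes (speech-to-text, GPT-4, text-to-speech around a virtual assistant) is straightforward to outline. A Python sketch assuming OpenAI's hosted STT and TTS endpoints, which the paper does not specify; the file names and voice are placeholders.

# Sketch of one voice turn of the anatomy-tutor avatar: STT -> LLM -> TTS.
from openai import OpenAI

client = OpenAI()

def avatar_turn(audio_path: str) -> None:
    # 1) Transcribe the student's spoken question.
    with open(audio_path, "rb") as f:
        question = client.audio.transcriptions.create(
            model="whisper-1", file=f).text
    # 2) Ask the tutoring model for a concise answer.
    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system",
                   "content": "You are Avatar, a virtual anatomy tutor. "
                              "Answer concisely for a medical student."},
                  {"role": "user", "content": question}],
    ).choices[0].message.content
    # 3) Synthesize the reply so the in-world avatar can speak it.
    speech = client.audio.speech.create(model="tts-1", voice="alloy",
                                        input=answer)
    speech.write_to_file("avatar_reply.mp3")

avatar_turn("student_question.wav")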
Min, Y.; Jeong, J. -W.
Public Speaking Q&A Practice with LLM-Generated Personas in Virtual Reality Proceedings Article
In: Eck, U.; Sra, M.; Stefanucci, J.; Sugimoto, M.; Tatzgern, M.; Williams, I. (Eds.): Proc. - IEEE Int. Symp. Mixed Augment. Real. Adjunct, ISMAR-Adjunct, pp. 493–496, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3315-0691-9.
@inproceedings{min_public_2024,
title = {Public Speaking Q&A Practice with LLM-Generated Personas in Virtual Reality},
author = {Y. Min and J. -W. Jeong},
editor = {Eck, U. and Sra, M. and Stefanucci, J. and Sugimoto, M. and Tatzgern, M. and Williams, I.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85214393734&doi=10.1109%2fISMAR-Adjunct64951.2024.00143&partnerID=40&md5=992d9599bde26f9d57d549639869d124},
doi = {10.1109/ISMAR-Adjunct64951.2024.00143},
isbn = {979-8-3315-0691-9},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Symp. Mixed Augment. Real. Adjunct, ISMAR-Adjunct},
pages = {493–496},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This paper introduces a novel VR-based Q&A practice system that harnesses the power of Large Language Models (LLMs). We support Q&A practice for upcoming public speaking by providing an immersive VR training environment populated with LLM-generated audiences, each capable of posing diverse and realistic questions based on different personas. We conducted a pilot user study involving 20 participants who engaged in VR-based Q&A practice sessions. The sessions featured a variety of questions regarding presentation material provided by the participants, all of which were generated by LLM-based personas. Through post-surveys and interviews, we evaluated the effectiveness of the proposed method. The participants valued the system for engagement and focus while also identifying several areas for improvement. Our study demonstrated the potential of integrating VR and LLMs to create a powerful, immersive tool for Q&A practice. © 2024 IEEE.},
keywords = {Digital elevation model, Economic and social effects, Language Model, Large language model-based persona generation, LLM-based Persona Generation, Model-based OPC, Personnel training, Power, Practice systems, Presentation Anxiety, Public speaking, Q&A practice, user experience, Users' experiences, Virtual environments, Virtual Reality, VR training},
pubstate = {published},
tppubtype = {inproceedings}
}
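The system's core move is generating a panel of distinct audience personas and having each pose a realistic question about the talk. A minimal Python sketch of that generation step, assuming the OpenAI API and its JSON response format; the schema and prompts are illustrative, not the authors'.

# Sketch: generating persona-grounded audience questions for Q&A rehearsal.
import json
from openai import OpenAI

client = OpenAI()

def audience_questions(presentation_summary: str, n_personas: int = 3) -> list:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # request parseable JSON
        messages=[{"role": "system",
                   "content": 'Return JSON: {"audience": [{"persona": str, '
                              '"question": str}]} with one realistic question '
                              "per persona; make personas as varied as possible."},
                  {"role": "user",
                   "content": f"Create {n_personas} audience members for this "
                              f"talk:\n{presentation_summary}"}],
    )
    # Assumes the model honors the requested schema; a robust system
    # would validate and retry on malformed output.
    return json.loads(response.choices[0].message.content)["audience"]

for member in audience_questions("A 10-minute talk on VR training for nurses"):
    print(f'{member["persona"]}: {member["question"]}')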
Bandara, E.; Foytik, P.; Shetty, S.; Hassanzadeh, A.
Generative-AI (with Custom-Trained Meta's Llama2 LLM), Blockchain, NFT, Federated Learning and PBOM Enabled Data Security Architecture for Metaverse on 5G/6G Environment Proceedings Article
In: Proc. - IEEE Int. Conf. Mob. Ad-Hoc Smart Syst., MASS, pp. 118–124, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3503-6399-9.
@inproceedings{bandara_generative-aicustom-trained_2024,
title = {Generative-AI (with Custom-Trained Meta's Llama2 LLM), Blockchain, NFT, Federated Learning and PBOM Enabled Data Security Architecture for Metaverse on 5G/6G Environment},
author = {E. Bandara and P. Foytik and S. Shetty and A. Hassanzadeh},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85210243120&doi=10.1109%2fMASS62177.2024.00026&partnerID=40&md5=70d21ac1e9c7b886da14825376919cac},
doi = {10.1109/MASS62177.2024.00026},
isbn = {979-8-3503-6399-9},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Mob. Ad-Hoc Smart Syst., MASS},
pages = {118–124},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The Metaverse is an integrated network of 3D virtual worlds accessible through a virtual reality headset. Its impact on data privacy and security is increasingly recognized as a major concern. There is a growing interest in developing a reference architecture that describes the four core aspects of its data: acquisition, storage, sharing, and interoperability. Establishing a secure data architecture is imperative to manage users' personal data and facilitate trusted AR/VR and AI/ML solutions within the Metaverse. This paper details a reference architecture empowered by Generative-AI, Blockchain, Federated Learning, and Non-Fungible Tokens (NFTs). Within this architecture, various resource providers collaborate via the blockchain network. Handling personal user data and resource provider identities is executed through a Self-Sovereign Identity-enabled privacy-preserving framework. AR/VR devices in the Metaverse are represented as NFT tokens available for user purchase. Software updates and supply-chain verification for these devices are managed using a Software Bill of Materials (SBOM) and a Pipeline Bill of Materials (PBOM) verification system. Moreover, a custom-trained Llama2 LLM from Meta has been integrated to generate PBOMs for AR/VR devices' software updates, thereby preventing malware intrusions and data breaches. This Llama2-13B LLM has been quantized and fine-tuned using Qlora to ensure optimal performance on consumer-grade hardware. The provenance of AI/ML models used in the Metaverse is encapsulated as Model Card objects, allowing external parties to audit and verify them, thus mitigating adversarial learning attacks within these models. To the best of our knowledge, this is the very first research effort aimed at standardizing PBOM schemas and integrating Language Model algorithms for the generation of PBOMs. Additionally, a proposed mechanism facilitates different AI/ML providers in training their machine learning models using a privacy-preserving federated learning approach. Authorization of communications among AR/VR devices in the Metaverse is conducted through a Zero-Trust security-enabled rule engine. A system testbed has been implemented within a 5G environment, utilizing Ericsson new Radio with Open5GS 5G core. © 2024 IEEE.},
keywords = {5G, 6G, Adversarial machine learning, Bill of materials, Block-chain, Blockchain, Curricula, Data privacy, Distance education, Federated learning, Generative adversarial networks, Generative-AI, Hardware security, Llama2, LLM, Medium access control, Metaverse, Metaverses, Network Security, Nft, Non-fungible token, Personnel training, Problem oriented languages, Reference architecture, Steganography},
pubstate = {published},
tppubtype = {inproceedings}
}
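The abstract states that the Llama2-13B model was quantized and fine-tuned with QLoRA to run on consumer-grade hardware. A minimal sketch of that setup using Hugging Face transformers and peft; the hyperparameters and target modules are illustrative assumptions, since the paper does not publish its training configuration or PBOM dataset.

# Sketch of a QLoRA setup: 4-bit quantized base model plus LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # 4-bit base weights
    bnb_4bit_quant_type="nf4",            # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",          # gated checkpoint; requires access
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention proj.
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

Training would then proceed over PBOM examples with a standard causal-LM trainer; only the LoRA adapter weights update while the 4-bit base stays frozen, which is what makes consumer-grade hardware feasible.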
Chan, A.; Liu, J. A.
Board 24: Development of Multi-User-enabled, Interactive, and Responsive Virtual/Augmented Reality-based Laboratory Training System Proceedings Article
In: ASEE Annu. Conf. Expos. Conf. Proc., American Society for Engineering Education, 2024, ISSN: 2153-5965.
@inproceedings{chan_board_2024,
title = {Board 24: Development of Multi-User-enabled, Interactive, and Responsive Virtual/Augmented Reality-based Laboratory Training System},
author = {A. Chan and J. A. Liu},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85202042560&partnerID=40&md5=9b1565ea2dd4336b4cc45594fe4f7900},
issn = {2153-5965},
year = {2024},
date = {2024-01-01},
booktitle = {ASEE Annu. Conf. Expos. Conf. Proc.},
publisher = {American Society for Engineering Education},
abstract = {The Unit Operations Laboratory (UOL) is a place where third-year chemical engineering students can apply their engineering and science concepts on pilot-scale equipment. However, the physical lab is resource-intensive, requiring protective equipment and constant supervision. In addition, due to limited units for large groups of students, students perform experiments according to the rolling program schedule, making alignment with lecture teaching and hands-on learning challenging. The research team focuses on increasing the accessibility of the UOL by using simulation gaming in standard, virtual reality and augmented reality modalities as an educational tool. The "Virtual Unit Ops Lab" application places students in an immersive simulated environment of the physical lab, where they can get practical experiences without the difficulties of an in-person lab by using specialized headsets and controllers, which allows the student to move and interact with various parts of the machine physically. Developed with Unity software, the application serves as a digital twin to an existing lab, which allows for an immersive simulation of the full-scale lab equipment, in addition to enhanced learning features such as the ability to display the current action performed by the user and to provide visual/audio feedback for correct and incorrect actions. The application also supports the use by multiple "players" (i.e., it has the "multiplayer" option), where multiple students can communicate and discuss their current step. As a work in progress, a non-player-character chatbot (generative AI responses) is being developed for existing applications using OpenAI's GPT-3.5, which provides designated information to a student in a conversational manner. Additionally, a supplemental "Augmented Unit Ops Lab" application uses Augmented Reality, which superimposes three-dimensional flow diagrams onto the Heat Exchanger through the view of a phone camera during the in-person labs. © American Society for Engineering Education, 2024.},
keywords = {'current, Augmented Reality, Chemical engineering students, Concentration (process), Cooling towers, Degassing, Hands-on learning, Hydrodesulfurization, Immersive, Large groups, Liquid crystal displays, Multiusers, Nanoreactors, Personnel training, Pilot-scale equipment, Protective equipment, Students, Training Systems, Unit Operations Laboratory},
pubstate = {published},
tppubtype = {inproceedings}
}
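The in-progress chatbot is described as giving "designated information" conversationally via GPT-3.5. A minimal sketch of that constraint, assuming the OpenAI API; the lab notes below are placeholders, not real UOL documentation.

# Sketch: an NPC chatbot restricted to instructor-supplied lab notes.
from openai import OpenAI

client = OpenAI()

EQUIPMENT_NOTES = """Placeholder lab notes: heat-exchanger start-up steps,
valve tags, and safe operating ranges would be pasted here by instructors."""

def npc_answer(student_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system",
                   "content": "You are a lab-assistant NPC. Answer only from "
                              "the notes below; if the answer is not there, "
                              "say the student should ask a TA.\n\n"
                              + EQUIPMENT_NOTES},
                  {"role": "user", "content": student_question}],
    )
    return response.choices[0].message.content

print(npc_answer("What should I check before starting the heat exchanger?"))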