AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
OUR RESEARCH
Scientific Publications
How to
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTeX record for each paper.
2025
Gao, H.; Xie, Y.; Kasneci, E.
PerVRML: ChatGPT-Driven Personalized VR Environments for Machine Learning Education Journal Article
In: International Journal of Human-Computer Interaction, 2025, ISSN: 1044-7318; 1532-7590, (Publisher: Taylor and Francis Ltd.).
Abstract | Links | BibTeX | Tags: Backpropagation, ChatGPT, Curricula, Educational robots, Immersive learning, Interactive learning, Language Model, Large language model, large language models, Learning mode, Machine learning education, Machine-learning, Personalized learning, Support vector machines, Teaching, Virtual Reality, Virtual-reality environment, Virtualization
@article{gao_pervrml_2025,
title = {PerVRML: ChatGPT-Driven Personalized VR Environments for Machine Learning Education},
author = {H. Gao and Y. Xie and E. Kasneci},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005776517&doi=10.1080%2F10447318.2025.2504188&partnerID=40&md5=27accdeba3e1e2202fc1102053d54b7c},
doi = {10.1080/10447318.2025.2504188},
issn = {10447318 (ISSN); 15327590 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {International Journal of Human-Computer Interaction},
abstract = {The advent of large language models (LLMs) such as ChatGPT has demonstrated significant potential for advancing educational technologies. Recently, growing interest has emerged in integrating ChatGPT with virtual reality (VR) to provide interactive and dynamic learning environments. This study explores the effectiveness of ChatGPT-driven VR in facilitating machine learning education through PerVRML. PerVRML incorporates a ChatGPT-powered avatar that provides real-time assistance and uses LLMs to personalize learning paths based on various sensor data from VR. A between-subjects design was employed to compare two learning modes: personalized and non-personalized. Quantitative data were collected from assessments, user experience surveys, and interaction metrics. The results indicate that while both learning modes supported learning effectively, ChatGPT-powered personalization significantly improved learning outcomes and had distinct impacts on user feedback. These findings underscore the potential of ChatGPT-enhanced VR to deliver adaptive and personalized educational experiences.},
note = {Publisher: Taylor and Francis Ltd.},
keywords = {Backpropagation, ChatGPT, Curricula, Educational robots, Immersive learning, Interactive learning, Language Model, Large language model, large language models, Learning mode, Machine learning education, Machine-learning, Personalized learning, Support vector machines, Teaching, Virtual Reality, Virtual-reality environment, Virtualization},
pubstate = {published},
tppubtype = {article}
}
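The PerVRML abstract above describes an LLM that personalizes learning paths from VR sensor data. Below is a minimal sketch of that idea, assuming the OpenAI Python client; the telemetry fields, model name, and prompt wording are illustrative assumptions, not the authors' implementation.

# Hedged sketch (not the authors' code) of LLM-driven personalization: VR
# telemetry and a quiz score are summarized into a prompt and the model picks
# the next module. Field names and model choice are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def recommend_next_module(telemetry: dict, quiz_score: float) -> str:
    prompt = (
        "You are a tutor inside a VR course on machine learning "
        "(modules: linear regression, support vector machines, backpropagation). "
        f"Learner telemetry from the headset: {json.dumps(telemetry)}. "
        f"Last quiz score: {quiz_score:.0%}. "
        "Recommend the single next module and one sentence of guidance."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(recommend_next_module(
    {"gaze_seconds_on_svm_panel": 12, "controller_interactions": 48}, 0.6))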
Nygren, T.; Samuelsson, M.; Hansson, P. -O.; Efimova, E.; Bachelder, S.
AI Versus Human Feedback in Mixed Reality Simulations: Comparing LLM and Expert Mentoring in Preservice Teacher Education on Controversial Issues Journal Article
In: International Journal of Artificial Intelligence in Education, 2025, ISSN: 1560-4306; 1560-4292, (Publisher: Springer).
Abstract | Links | BibTeX | Tags: AI-generated feedback, Controversial issue in social study education, Controversial issues in social studies education, Curricula, Domain knowledge, Economic and social effects, Expert systems, Generative AI, Human engineering, Knowledge engineering, Language Model, Large language model, large language models (LLMs), Mixed reality, Mixed reality simulation, Mixed reality simulation (MRS), Pedagogical content knowledge, Pedagogical content knowledge (PCK), Personnel training, Preservice teachers, Social studies education, Teacher training, Teacher training simulation, Teacher training simulations, Teaching, Training simulation
@article{nygren_ai_2025,
title = {AI Versus Human Feedback in Mixed Reality Simulations: Comparing LLM and Expert Mentoring in Preservice Teacher Education on Controversial Issues},
author = {T. Nygren and M. Samuelsson and P. -O. Hansson and E. Efimova and S. Bachelder},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007244772&doi=10.1007%2Fs40593-025-00484-8&partnerID=40&md5=3404a614af6fe4d4d2cb284060600e3c},
doi = {10.1007/s40593-025-00484-8},
issn = {15604306 (ISSN); 15604292 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {International Journal of Artificial Intelligence in Education},
abstract = {This study explores the potential role of AI-generated mentoring within simulated environments designed for teacher education, specifically focused on the challenges of teaching controversial issues. Using a mixed-methods approach, we empirically investigate the potential and challenges of AI-generated feedback compared to that provided by human experts when mentoring preservice teachers in the context of mixed reality simulations. Findings reveal that human experts offered more mixed and nuanced feedback than ChatGPT-4o and Perplexity, especially when identifying missed teaching opportunities and balancing classroom discussions. The AI models evaluated were publicly available pro versions of LLMs and were tested using detailed prompts and coding schemes aligned with educational theories. AI systems were not very good at identifying aspects of general, pedagogical or content knowledge based on Shulman’s theories but were still quite effective in generating feedback in line with human experts. The study highlights the promise of AI to enhance teacher training but underscores the importance of combining AI feedback with expert insights to address the complexities of real-world teaching. This research contributes to a growing understanding of AI's potential role and limitations in education. It suggests that, while AI can be valuable to scale mixed reality simulations, it should be carefully evaluated and balanced by human expertise in teacher education.},
note = {Publisher: Springer},
keywords = {AI-generated feedback, Controversial issue in social study education, Controversial issues in social studies education, Curricula, Domain knowledge, Economic and social effects, Expert systems, Generative AI, Human engineering, Knowledge engineering, Language Model, Large language model, large language models (LLMs), Mixed reality, Mixed reality simulation, Mixed reality simulation (MRS), Pedagogical content knowledge, Pedagogical content knowledge (PCK), Personnel training, Preservice teachers, Social studies education, Teacher training, Teacher training simulation, Teacher training simulations, Teaching, Training simulation},
pubstate = {published},
tppubtype = {article}
}
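The study above prompts publicly available LLMs with detailed coding schemes to generate mentoring feedback on simulated lessons. The sketch below illustrates that setup with the OpenAI client; the scheme categories and model name are placeholders, not the paper's actual prompts or rubric.

# Hedged sketch, not the study's prompts: an LLM mentors a preservice teacher
# against a simple coding scheme. The categories below are illustrative
# stand-ins for the paper's education-theory-aligned codes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

CODING_SCHEME = [
    "balance of perspectives in the classroom discussion",
    "missed teaching opportunities",
    "general, pedagogical, and content knowledge displayed",
]

def mentor_feedback(transcript: str) -> str:
    prompt = (
        "You are an experienced mentor of preservice teachers. Review this "
        "mixed-reality lesson transcript on a controversial issue and give "
        "feedback under each category:\n"
        + "\n".join(f"- {c}" for c in CODING_SCHEME)
        + f"\n\nTranscript:\n{transcript}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content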
Anvitha, K.; Durjay, T.; Sathvika, K.; Gnanendra, G.; Annamalai, S.; Natarajan, S. K.
EduBot: A Compact AI-Driven Study Assistant for Contextual Knowledge Retrieval Proceedings Article
In: Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331507756 (ISBN).
Abstract | Links | BibTeX | Tags: Chatbots, Computer aided instruction, Contextual knowledge, Curricula, Digital Education, E-Learning, Education computing, Educational Technology, Engineering education, Indexing (of information), Information Retrieval, Intelligent systems, Knowledge retrieval, LangChain Framework, Language Model, Large language model, learning experience, Learning experiences, Learning systems, LLM, PDF - Driven Chatbot, Query processing, Students, Teaching, Traditional learning, Virtual Reality
@inproceedings{anvitha_edubot_2025,
title = {EduBot: A Compact AI-Driven Study Assistant for Contextual Knowledge Retrieval},
author = {K. Anvitha and T. Durjay and K. Sathvika and G. Gnanendra and S. Annamalai and S. K. Natarajan},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105013615976&doi=10.1109%2FGINOTECH63460.2025.11077097&partnerID=40&md5=b08377283f2ea2ee406d38d1d23f1e42},
doi = {10.1109/GINOTECH63460.2025.11077097},
isbn = {9798331507756 (ISBN)},
year = {2025},
date = {2025-01-01},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {In the evolving landscape of educational technology, intelligent systems are redefining traditional learning methods by enhancing accessibility, adaptability, and engagement in instructional processes. This paper presents EduBot, a PDF-Driven Chatbot developed using advanced Large Language Models (LLMs) and leveraging frameworks like LangChain, OpenAI's ChatGPT, and Pinecone. EduBot is designed as an interactive educational assistant, responding to student queries based on faculty-provided guidelines embedded in PDF documents. Through natural language processing, EduBot streamlines information retrieval, providing accurate, context-aware responses that foster a self-directed learning experience. By aligning with specific academic requirements and enhancing clarity in information delivery, EduBot stands as a promising tool in personalized digital learning support. This paper explores the design, implementation, and impact of EduBot, offering insights into its potential as a scalable solution for academic institutions. The demand for accessible and adaptive educational tools is increasing as students seek more personalized and efficient ways to enhance their learning experience. EduBot is a cutting-edge PDF-driven chatbot designed to act as a virtual educational assistant, helping students to navigate and understand course materials by answering queries directly based on faculty guidelines. Built upon Large Language Models (LLMs), specifically utilizing frameworks such as LangChain and OpenAI's GPT-3.5, EduBot provides a sophisticated solution for integrating curated academic content into interactive learning. With its backend support from Pinecone for optimized data indexing, EduBot offers accurate and context-specific responses, facilitating a deeper level of engagement and comprehension. The average relevancy score is 80%. This paper outlines the design and deployment of EduBot, emphasizing its architecture, adaptability, and contributions to the educational landscape, where such AI-driven tools are poised to become indispensable in fostering autonomous, personalized learning environments.},
keywords = {Chatbots, Computer aided instruction, Contextual knowledge, Curricula, Digital Education, E-Learning, Education computing, Educational Technology, Engineering education, Indexing (of information), Information Retrieval, Intelligent systems, Knowledge retrieval, LangChain Framework, Language Model, Large language model, learning experience, Learning experiences, Learning systems, LLM, PDF - Driven Chatbot, Query processing, Students, Teaching, Traditional learning, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
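EduBot, as described above, answers student queries by retrieving passages from faculty-provided PDFs before calling an LLM. The paper reports a LangChain + Pinecone stack; the standalone sketch below swaps in an in-memory numpy store so only the embed-retrieve-answer flow is shown. All chunk text and model names are illustrative, not the authors' configuration.

# Minimal retrieval-augmented QA sketch in the spirit of EduBot, using an
# in-memory numpy store instead of the paper's Pinecone index.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# chunks would normally come from splitting the faculty-provided PDF guidelines
chunks = ["Unit 1 covers search and sorting ...", "Grading policy: two quizzes ..."]
chunk_vecs = embed(chunks)

def answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(chunks[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Answer using only this course material:\n{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content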
Boubakri, F. -E.; Kadri, M.; Kaghat, F. Z.; Azough, A.; Tairi, H.
Exploring 3D Cardiac Anatomy with Text-Based AI Guidance in Virtual Reality Proceedings Article
In: pp. 43–48, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331534899 (ISBN).
Abstract | Links | BibTeX | Tags: 3D cardiac anatomy, 3d heart models, Anatomy education, Anatomy educations, Cardiac anatomy, Collaborative environments, Collaborative learning, Computer aided instruction, Curricula, Design and Development, E-Learning, Education computing, Generative AI, Heart, Immersive environment, Learning systems, Natural language processing systems, Social virtual reality, Students, Teaching, Three dimensional computer graphics, Virtual Reality
@inproceedings{boubakri_exploring_2025,
title = {Exploring 3D Cardiac Anatomy with Text-Based AI Guidance in Virtual Reality},
author = {F. -E. Boubakri and M. Kadri and F. Z. Kaghat and A. Azough and H. Tairi},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105015676741&doi=10.1109%2FSCME62582.2025.11104869&partnerID=40&md5=c961694f97c50adc23b6826dddb265cd},
doi = {10.1109/SCME62582.2025.11104869},
isbn = {9798331534899 (ISBN)},
year = {2025},
date = {2025-01-01},
pages = {43–48},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This paper presents the design and development of a social virtual reality (VR) classroom focused on cardiac anatomy education for students in grades K-12. The application allows multiple learners to explore a detailed 3D heart model within an immersive and collaborative environment. A crucial part of the system is the integration of a text-based conversational AI interface powered by ChatGPT, which provides immediate, interactive explanations and addresses student inquiries about heart anatomy. The system supports both guided and exploratory learning modes, encourages peer collaboration, and offers personalized support through natural language dialogue. We evaluated the system's effectiveness through a comprehensive study measuring learning perception (LPQ), VR perception (VRPQ), AI perception (AIPQ), and VR-related symptoms (VRSQ). Potential applications include making high-quality cardiac anatomy education more affordable for K-12 schools with limited resources, offering an adaptable AI-based tutoring system for students to learn at their own pace, and equipping educators with an easy-to-use tool to integrate into their science curriculum with minimal additional training.},
keywords = {3D cardiac anatomy, 3d heart models, Anatomy education, Anatomy educations, Cardiac anatomy, Collaborative environments, Collaborative learning, Computer aided instruction, Curricula, Design and Development, E-Learning, Education computing, Generative AI, Heart, Immersive environment, Learning systems, Natural language processing systems, Social virtual reality, Students, Teaching, Three dimensional computer graphics, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
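The system above couples a 3D heart model with a text-based ChatGPT guide that supports guided and exploratory modes. The sketch below shows one way such a constrained tutor could be wired up; the system prompt, mode names, and model are assumptions rather than the authors' code.

# Hedged sketch of a text-based anatomy guide with guided/exploratory modes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def heart_tutor(question: str, mode: str = "exploratory") -> str:
    system = (
        "You are a K-12 tutor embedded in a shared VR heart model. "
        "Explain cardiac anatomy in simple, age-appropriate language. "
        + ("Follow the guided lesson order: chambers, valves, blood flow."
           if mode == "guided"
           else "Answer questions about whichever structure the student is inspecting.")
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(heart_tutor("What does the mitral valve do?", mode="guided"))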
Zhu, X. T.; Cheerman, H.; Cheng, M.; Kiami, S. R.; Chukoskie, L.; McGivney, E.
Designing VR Simulation System for Clinical Communication Training with LLMs-Based Embodied Conversational Agents Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2025, ISBN: 9798400713958 (ISBN); 9798400713941 (ISBN).
Abstract | Links | BibTeX | Tags: Clinical communications, Clinical Simulation, Communications training, Curricula, Embodied conversational agent, Embodied Conversational Agents, Health professions, Intelligent virtual agents, Language Model, Medical education, Model-based OPC, Patient simulators, Personnel training, Students, Teaching, User centered design, Virtual environments, Virtual Reality, VR simulation, VR simulation systems
@inproceedings{zhu_designing_2025,
title = {Designing VR Simulation System for Clinical Communication Training with LLMs-Based Embodied Conversational Agents},
author = {X. T. Zhu and H. Cheerman and M. Cheng and S. R. Kiami and L. Chukoskie and E. McGivney},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005754066&doi=10.1145%2F3706599.3719693&partnerID=40&md5=6ad72d5adf98c2ca2437b5a3f6508a88},
doi = {10.1145/3706599.3719693},
isbn = {9798400713958 (ISBN); 9798400713941 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {VR simulation in Health Professions (HP) education demonstrates huge potential, but fixed learning content with little customization limits its application beyond lab environments. To address these limitations in the context of VR for patient communication training, we conducted a user-centered study involving semi-structured interviews with advanced HP students to understand their challenges in clinical communication training and perceptions of VR-based solutions. From this, we derived design insights emphasizing the importance of realistic scenarios, simple interactions, and unpredictable dialogues. Building on these insights, we developed the Virtual AI Patient Simulator (VAPS), a novel VR system powered by Large Language Models (LLMs) and Embodied Conversational Agents (ECAs), supporting dynamic and customizable patient interactions for immersive learning. We also provided an example of how clinical professors could use user-friendly design forms to create personalized scenarios that align with course objectives in VAPS and discuss future implications of integrating AI-driven technologies into VR education.},
keywords = {Clinical communications, Clinical Simulation, Communications training, Curricula, Embodied conversational agent, Embodied Conversational Agents, Health professions, Intelligent virtual agents, Language Model, Medical education, Model-based OPC, Patient simulators, Personnel training, Students, Teaching, User centered design, Virtual environments, Virtual Reality, VR simulation, VR simulation systems},
pubstate = {published},
tppubtype = {inproceedings}
}
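VAPS, as described above, lets clinical professors author scenarios through design forms that drive an LLM-based virtual patient. The sketch below shows how such a form could be folded into a system prompt; the form fields and model name are hypothetical, not taken from the paper.

# Hedged sketch of an instructor-authored form driving an LLM virtual patient.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

scenario_form = {  # hypothetical fields a clinical professor might fill in
    "patient_name": "Ms. Rivera",
    "age": 67,
    "chief_complaint": "shortness of breath for three days",
    "emotional_state": "anxious, tends to minimize symptoms",
    "learning_objective": "practice open-ended history taking",
}

def virtual_patient_reply(history: list[dict], student_utterance: str) -> str:
    system = (
        "Role-play a standardized patient for a health-professions student. "
        f"Patient profile: {scenario_form}. Stay in character, answer only what "
        "is asked, and reflect the specified emotional state."
    )
    messages = [{"role": "system", "content": system},
                *history,
                {"role": "user", "content": student_utterance}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content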
2024
Harinee, S.; Raja, R. Vimal; Mugila, E.; Govindharaj, I.; Sanjaykumar, V.; Ragavendhiran, T.
Elevating Medical Training: A Synergistic Fusion of AI and VR for Immersive Anatomy Learning and Practical Procedure Mastery Proceedings Article
In: Int. Conf. Syst., Comput., Autom. Netw., ICSCAN, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 9798331510022 (ISBN).
Abstract | Links | BibTeX | Tags: Anatomy education, Anatomy educations, Computer interaction, Curricula, Embodied virtual assistant, Embodied virtual assistants, Generative AI, Human- Computer Interaction, Immersive, Intelligent virtual agents, Medical computing, Medical education, Medical procedure practice, Medical procedures, Medical training, Personnel training, Students, Teaching, Three dimensional computer graphics, Usability engineering, Virtual assistants, Virtual environments, Virtual Reality, Visualization
@inproceedings{harinee_elevating_2024,
title = {Elevating Medical Training: A Synergistic Fusion of AI and VR for Immersive Anatomy Learning and Practical Procedure Mastery},
author = {S. Harinee and R. Vimal Raja and E. Mugila and I. Govindharaj and V. Sanjaykumar and T. Ragavendhiran},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000334626&doi=10.1109%2FICSCAN62807.2024.10894451&partnerID=40&md5=ae7a491686ade8cebdc276f585a6f4f0},
doi = {10.1109/ICSCAN62807.2024.10894451},
isbn = {9798331510022 (ISBN)},
year = {2024},
date = {2024-01-01},
booktitle = {Int. Conf. Syst., Comput., Autom. Netw., ICSCAN},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Virtual reality, with its 3D visualization, has brought an overwhelming change to medical education, especially for courses such as human anatomy. The proposed virtual reality system aims to bring substantial improvements to the education received by medical students studying for their degree courses. The project puts forward a text-to-speech and speech-to-text aligned system that simplifies the use of a chatbot powered by OpenAI GPT-4 and allows students to speak vocally with an avatar, the system's virtual assistant. In contrast to current methodologies, the virtual reality setup is powered by avatars and thus provides an enhanced virtual assistant environment. Avatars offer students repeated practice of medical procedures, which is the real uniqueness of the proposed product. The developed virtual reality environment improves on other current training techniques by letting students interact with and immerse themselves in three-dimensional human organs, and hence gain deeper knowledge of the subject. A virtual assistant guides the whole process, giving insights and support to help the student bridge the gap from theory to practice. The system essentially follows a knowledge-based and analysis-based approach. The combination of generative AI with embodied virtual agents has great potential for customized virtual conversation assistants across a much wider range of applications. The study highlights the value of acquiring hands-on skills through simulated medical procedures and opens new frontiers of research and development in AI, VR, and medical education. In addition to assessing the effectiveness of these novel functionalities, the study also explores user-experience dimensions such as usability, task load, and the sense of presence in the proposed virtual medical environment.},
keywords = {Anatomy education, Anatomy educations, Computer interaction, Curricula, Embodied virtual assistant, Embodied virtual assistants, Generative AI, Human- Computer Interaction, Immersive, Intelligent virtual agents, Medical computing, Medical education, Medical procedure practice, Medical procedures, Medical training, Personnel training, Students, Teaching, Three dimensional computer graphics, Usability engineering, Virtual assistants, Virtual environments, Virtual Reality, Visualization},
pubstate = {published},
tppubtype = {inproceedings}
}
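The abstract above describes a speech-to-text / text-to-speech loop around a GPT-4 chatbot that lets students talk to the avatar. A minimal sketch of that loop is shown below, assuming OpenAI's Whisper transcription endpoint; the prompt and file handling are illustrative, and speech synthesis is left to the VR engine.

# Hedged sketch (not the authors' code) of the spoken question -> GPT-4 ->
# spoken answer loop; model names are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def avatar_turn(audio_path: str) -> str:
    # 1) speech-to-text: transcribe the student's spoken question
    with open(audio_path, "rb") as f:
        question = client.audio.transcriptions.create(model="whisper-1", file=f).text
    # 2) LLM: answer as the anatomy virtual assistant
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a virtual assistant guiding a medical student "
                        "through 3D human anatomy and simulated procedures."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content
    # 3) hand the reply text to the VR engine's text-to-speech for the avatar
    return reply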
Bandara, E.; Foytik, P.; Shetty, S.; Hassanzadeh, A.
Generative-AI(with Custom-Trained Meta's Llama2 LLM), Blockchain, NFT, Federated Learning and PBOM Enabled Data Security Architecture for Metaverse on 5G/6G Environment Proceedings Article
In: Proc. - IEEE Int. Conf. Mob. Ad-Hoc Smart Syst., MASS, pp. 118–124, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 9798350363999 (ISBN).
Abstract | Links | BibTeX | Tags: 5G, 6G, Adversarial machine learning, Bill of materials, Block-chain, Blockchain, Curricula, Data privacy, Distance education, Federated learning, Generative adversarial networks, Generative-AI, Hardware security, Llama2, LLM, Medium access control, Metaverse, Metaverses, Network Security, Nft, Non-fungible token, Personnel training, Problem oriented languages, Reference architecture, Steganography
@inproceedings{bandara_generative-aicustom-trained_2024,
title = {Generative-AI(with Custom-Trained Meta's Llama2 LLM), Blockchain, NFT, Federated Learning and PBOM Enabled Data Security Architecture for Metaverse on 5G/6G Environment},
author = {E. Bandara and P. Foytik and S. Shetty and A. Hassanzadeh},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85210243120&doi=10.1109%2FMASS62177.2024.00026&partnerID=40&md5=f40a8db565fad44ea6dca76eae709ac2},
doi = {10.1109/MASS62177.2024.00026},
isbn = {9798350363999 (ISBN)},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Mob. Ad-Hoc Smart Syst., MASS},
pages = {118–124},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The Metaverse is an integrated network of 3D virtual worlds accessible through a virtual reality headset. Its impact on data privacy and security is increasingly recognized as a major concern. There is a growing interest in developing a reference architecture that describes the four core aspects of its data: acquisition, storage, sharing, and interoperability. Establishing a secure data architecture is imperative to manage users' personal data and facilitate trusted AR/VR and AI/ML solutions within the Metaverse. This paper details a reference architecture empowered by Generative-AI, Blockchain, Federated Learning, and Non-Fungible Tokens (NFTs). Within this architecture, various resource providers collaborate via the blockchain network. Handling personal user data and resource provider identities is executed through a Self-Sovereign Identity-enabled privacy-preserving framework. AR/VR devices in the Metaverse are represented as NFT tokens available for user purchase. Software updates and supply-chain verification for these devices are managed using a Software Bill of Materials (SBOM) and a Pipeline Bill of Materials (PBOM) verification system. Moreover, a custom-trained Llama2 LLM from Meta has been integrated to generate PBOMs for AR/VR devices' software updates, thereby preventing malware intrusions and data breaches. This Llama2-13B LLM has been quantized and fine-tuned using QLoRA to ensure optimal performance on consumer-grade hardware. The provenance of AI/ML models used in the Metaverse is encapsulated as Model Card objects, allowing external parties to audit and verify them, thus mitigating adversarial learning attacks within these models. To the best of our knowledge, this is the very first research effort aimed at standardizing PBOM schemas and integrating Language Model algorithms for the generation of PBOMs. Additionally, a proposed mechanism facilitates different AI/ML providers in training their machine learning models using a privacy-preserving federated learning approach. Authorization of communications among AR/VR devices in the Metaverse is conducted through a Zero-Trust security-enabled rule engine. A system testbed has been implemented within a 5G environment, utilizing Ericsson New Radio with Open5GS 5G core.},
keywords = {5G, 6G, Adversarial machine learning, Bill of materials, Block-chain, Blockchain, Curricula, Data privacy, Distance education, Federated learning, Generative adversarial networks, Generative-AI, Hardware security, Llama2, LLM, Medium access control, Metaverse, Metaverses, Network Security, Nft, Non-fungible token, Personnel training, Problem oriented languages, Reference architecture, Steganography},
pubstate = {published},
tppubtype = {inproceedings}
}
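Among the components above, the architecture lets AI/ML providers train models through privacy-preserving federated learning. The sketch below shows only the plain federated-averaging step that underlies such schemes; the weighting and array shapes are illustrative and not the paper's implementation.

# Hedged sketch of the federated-averaging aggregation step: providers train
# locally and only parameter updates, never raw data, are combined.
import numpy as np

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Weight each provider's parameters by its local dataset size and average."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three hypothetical AI/ML providers with differently sized local datasets
updates = [np.random.randn(4) for _ in range(3)]
sizes = [1200, 800, 500]
global_update = fed_avg(updates, sizes)
print(global_update)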