AHCI RESEARCH GROUP
Publications
Papers published in international journals, proceedings of conferences, workshops and books.
OUR RESEARCH
Scientific Publications
2025
Banafa, A.
Artificial intelligence in action: Real-world applications and innovations Book
River Publishers, 2025, ISBN: 978-877004619-0 (ISBN); 978-877004620-6 (ISBN).
@book{banafa_artificial_2025,
title = {Artificial intelligence in action: Real-world applications and innovations},
author = {A. Banafa},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000403587&partnerID=40&md5=4b0d94be48194a942b22bef63f36d3bf},
isbn = {978-877004619-0 (ISBN); 978-877004620-6 (ISBN)},
year = {2025},
date = {2025-01-01},
publisher = {River Publishers},
series = {Artificial Intelligence in Action: Real-World Applications and Innovations},
abstract = {This comprehensive book dives deep into the current landscape of AI, exploring its fundamental principles, development challenges, potential risks, and the cutting-edge breakthroughs that are propelling it forward. Artificial intelligence (AI) is rapidly transforming industries and societies worldwide through groundbreaking innovations and real-world applications. Starting with the core concepts, the book examines the various types of AI systems, generative AI models, and the complexities of machine learning. It delves into the programming languages driving AI development, data pipelines, model creation and deployment processes, while shedding light on issues like AI hallucinations and the intricate path of machine unlearning. The book then showcases the remarkable real-world applications of AI across diverse domains. From preventing job displacement and promoting environmental sustainability, to enhancing disaster response, drone technology, and even nuclear energy innovation, it highlights how AI is tackling complex challenges and driving positive change. The book also explores the double-edged nature of AI, recognizing its tremendous potential while cautioning about the risks of misuse, unintended consequences, and the urgent need for responsible development practices. It examines the intersection of AI and fields like operating system design, warfare, and semiconductor technology, underscoring the wide-ranging implications of this transformative force. As the quest for artificial general intelligence (AGI) and superintelligent AI systems intensifies, the book delves into cutting-edge research, emerging trends, and the pursuit of multimodal, explainable, and causally aware AI systems. It explores the symbiotic relationship between AI and human creativity, the rise of user-friendly "casual AI," and the potential of AI to tackle open-ended tasks. This is an essential guide for understanding the profound impact of AI on our world today and its potential to shape our future. From the frontiers of innovation to the challenges of responsible development, this book offers a comprehensive and insightful exploration of the remarkable real-world applications and innovations driving the AI revolution. © 2025 River Publishers. All rights reserved.},
keywords = {5G, Affective Computing, AGI, AI, AI alignments, AI Ethics, AI hallucinations, AI hype, AI models, Alexa, ANI, ASI, Augmented Reality, Autoencoders, Autonomic computing, Autonomous Cars, Autoregressive models, Big Data, Big Data Analytics, Bitcoin, Blockchain, C3PO, Casual AI, Causal reasoning, ChatGPT, Cloud computing, Collective AI, Compression engines, Computer vision, Conditional Automation, Convolutional neural networks (CNNs), Cryptocurrency, Cybersecurity, Deceptive AI, Deep learning, Digital transformation, Driver Assistance, Driverless Cars, Drones, Elon Musk, Entanglement, Environment and sustainability, Ethereum, Explainable AI, Facebook, Facial Recognition, Feedforward Neural Networks, Fog Computing, Full Automation, Future of AI, General AI, Generative Adversarial Networks (GANs), Generative AI, Google, Green AI, High Automation, Hybrid Blockchain, IEEE, Industrial Internet of Things (IIoT), Internet of things (IoT), Jarvis, Java, JavaScript, Long Short-Term Memory Networks, LTE, machine learning, Microsoft, MultiModal AI, Narrow AI, Natural disasters, Natural Language Generation (NLG), Natural Language Processing (NLP), NetFlix, Network Security, Neural Networks, Nuclear, Nuclear AI, NYTimes, Objective-driven AI, Open Source, Partial Automation, PayPal, Perfect AI, Private Blockchain, Private Cloud Computing, Programming languages, Python, Quantum Communications, Quantum Computing, Quantum Cryptography, Quantum internet, Quantum Machine Learning (QML), R2D2, Reactive machines, limited memory, Recurrent Neural Networks, Responsible AI, Robots, Sci-Fi movies, Self-Aware, Semiconductor's, Sensate AI, Siri, Small Data, Smart Contracts, Hybrid Cloud Computing, Smart Devices, Sovereign AI, Super AI, Superposition, TensorFlow, Theory of Mind, Thick Data, Twitter, Variational Autoencoders (VAEs), Virtual Reality, Voice user interface (VUI), Wearable computing devices (WCD), Wearable Technology, Wi-Fi, XAI, Zero-Trust Model},
pubstate = {published},
tppubtype = {book}
}
Behravan, M.; Matković, K.; Grǎcanin, D.
Generative AI for Context-Aware 3D Object Creation Using Vision-Language Models in Augmented Reality Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 73–81, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331521578 (ISBN).
@inproceedings{behravan_generative_2025,
title = {Generative AI for Context-Aware 3D Object Creation Using Vision-Language Models in Augmented Reality},
author = {M. Behravan and K. Matković and D. Grǎcanin},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000292700&doi=10.1109%2FAIxVR63409.2025.00018&partnerID=40&md5=0a11897a4f37fd8ebaa257498cb92eb7},
doi = {10.1109/AIxVR63409.2025.00018},
isbn = {9798331521578 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {73–81},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {We present a novel Artificial Intelligence (AI) system that functions as a designer assistant in augmented reality (AR) environments. Leveraging Vision Language Models (VLMs) like LLaVA and advanced text-to-3D generative models, users can capture images of their surroundings with an Augmented Reality (AR) headset. The system analyzes these images to recommend contextually relevant objects that enhance both functionality and visual appeal. The recommended objects are generated as 3D models and seamlessly integrated into the AR environment for interactive use. Our system utilizes open-source AI models running on local systems to enhance data security and reduce operational costs. Key features include context-aware object suggestions, optimal placement guidance, aesthetic matching, and an intuitive user interface for real-time interaction. Evaluations using the COCO 2017 dataset and real-world AR testing demonstrated high accuracy in object detection and contextual fit rating of 4.1 out of 5. By addressing the challenge of providing context-aware object recommendations in AR, our system expands the capabilities of AI applications in this domain. It enables users to create personalized digital spaces efficiently, leveraging AI for contextually relevant suggestions. © 2025 Elsevier B.V., All rights reserved.},
keywords = {3D object, 3D Object Generation, Artificial intelligence systems, Augmented Reality, Capture images, Context-Aware, Generative adversarial networks, Generative AI, generative artificial intelligence, Generative model, Language Model, Object creation, Vision language model, vision language models, Visual languages},
pubstate = {published},
tppubtype = {inproceedings}
}
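The pipeline described in the abstract above (capture an image with the AR headset, ask a vision-language model such as LLaVA for contextually fitting objects, generate them with a text-to-3D model, and place them in the scene) can be summarized in a short sketch. The helper functions below are hypothetical stand-ins, stubbed out so the example runs; they are not the authors' API.

from dataclasses import dataclass

@dataclass
class Suggestion:
    label: str    # e.g. "floor lamp"
    prompt: str   # text prompt handed to the text-to-3D model

def query_vlm(image_path: str, question: str) -> list[Suggestion]:
    # Stub: a real implementation would send the captured image and the question to a
    # locally hosted vision-language model and parse its answer into suggestions.
    return [Suggestion("floor lamp", "a modern minimalist floor lamp")]

def text_to_3d(prompt: str) -> str:
    # Stub: a real implementation would run a text-to-3D generator and return a mesh file.
    return "/tmp/" + prompt.replace(" ", "_") + ".glb"

def place_in_ar(mesh_path: str) -> None:
    # Stub: a real implementation would hand the mesh to the AR headset runtime.
    print("placing", mesh_path, "in the AR scene")

for s in query_vlm("headset_capture.jpg",
                   "Which objects would improve this room functionally and aesthetically?"):
    place_in_ar(text_to_3d(s.prompt))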
Liu, G.; Du, H.; Wang, J.; Niyato, D.; Kim, D. I.
Contract-Inspired Contest Theory for Controllable Image Generation in Mobile Edge Metaverse Journal Article
In: IEEE Transactions on Mobile Computing, vol. 24, no. 8, pp. 7389–7405, 2025, ISSN: 15361233 (ISSN), (Publisher: Institute of Electrical and Electronics Engineers Inc.).
@article{liu_contract-inspired_2025,
title = {Contract-Inspired Contest Theory for Controllable Image Generation in Mobile Edge Metaverse},
author = {G. Liu and H. Du and J. Wang and D. Niyato and D. I. Kim},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000066834&doi=10.1109%2FTMC.2025.3550815&partnerID=40&md5=f95abb0df00e3112fa2c15ee77eb41bc},
doi = {10.1109/TMC.2025.3550815},
issn = {15361233 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Mobile Computing},
volume = {24},
number = {8},
pages = {7389–7405},
abstract = {The rapid advancement of immersive technologies has propelled the development of the Metaverse, where the convergence of virtual and physical realities necessitates the generation of high-quality, photorealistic images to enhance user experience. However, generating these images, especially through Generative Diffusion Models (GDMs), in mobile edge computing environments presents significant challenges due to the limited computing resources of edge devices and the dynamic nature of wireless networks. This paper proposes a novel framework that integrates contract-inspired contest theory, Deep Reinforcement Learning (DRL), and GDMs to optimize image generation in these resource-constrained environments. The framework addresses the critical challenges of resource allocation and semantic data transmission quality by incentivizing edge devices to efficiently transmit high-quality semantic data, which is essential for creating realistic and immersive images. The use of contest and contract theory ensures that edge devices are motivated to allocate resources effectively, while DRL dynamically adjusts to network conditions, optimizing the overall image generation process. Experimental results demonstrate that the proposed approach not only improves the quality of generated images but also achieves superior convergence speed and stability compared to traditional methods. This makes the framework particularly effective for optimizing complex resource allocation tasks in mobile edge Metaverse applications, offering enhanced performance and efficiency in creating immersive virtual environments. © 2025 Elsevier B.V., All rights reserved.},
note = {Publisher: Institute of Electrical and Electronics Engineers Inc.},
keywords = {Contest Theory, Deep learning, Deep reinforcement learning, Diffusion Model, Generative adversarial networks, Generative AI, High quality, Image generation, Image generations, Immersive technologies, Metaverses, Mobile edge computing, Reinforcement Learning, Reinforcement learnings, Resource allocation, Resources allocation, Semantic data, Virtual addresses, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}
Gatti, E.; Giunchi, D.; Numan, N.; Steed, A.
Around the Virtual Campfire: Early UX Insights into AI-Generated Stories in VR Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 136–141, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331521578 (ISBN).
@inproceedings{gatti_around_2025,
title = {Around the Virtual Campfire: Early UX Insights into AI-Generated Stories in VR},
author = {E. Gatti and D. Giunchi and N. Numan and A. Steed},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000263662&doi=10.1109%2FAIxVR63409.2025.00027&partnerID=40&md5=ab95e803af14233db6ed307222632542},
doi = {10.1109/AIxVR63409.2025.00027},
isbn = {9798331521578 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {136–141},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Virtual Reality (VR) presents an immersive platform for storytelling, allowing narratives to unfold in highly engaging, interactive environments. Leveraging AI capabilities and image synthesis offers new possibilities for creating scalable, generative VR content. In this work, we use an LLM-driven VR storytelling platform to explore how AI-generated visuals and narrative elements impact the user experience in VR storytelling. Previously, we presented AIsop, a system to integrate LLM-generated text and images and TTS audio into a storytelling experience, where the narrative unfolds based on user input. In this paper, we present two user studies focusing on how AI-generated visuals influence narrative perception and the overall VR experience. Our findings highlight the positive impact of AI-generated pictorial content on the storytelling experience, highlighting areas for enhancement and further research in interactive narrative design. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Generative AI, Images synthesis, Immersive, Interactive Environments, Language Model, Large language model, Storytelling, User input, User study, Users' experiences, Virtual environments, VR},
pubstate = {published},
tppubtype = {inproceedings}
}
Tracy, K.; Spantidi, O.
Impact of GPT-Driven Teaching Assistants in VR Learning Environments Journal Article
In: IEEE Transactions on Learning Technologies, vol. 18, pp. 192–205, 2025, ISSN: 19391382 (ISSN), (Publisher: Institute of Electrical and Electronics Engineers Inc.).
@article{tracy_impact_2025,
title = {Impact of GPT-Driven Teaching Assistants in VR Learning Environments},
author = {K. Tracy and O. Spantidi},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001083336&doi=10.1109%2FTLT.2025.3539179&partnerID=40&md5=fc4deb58acaf5bac8f4805ef7035396d},
doi = {10.1109/TLT.2025.3539179},
issn = {19391382 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Learning Technologies},
volume = {18},
pages = {192–205},
abstract = {Virtual reality (VR) has emerged as a transformative educational tool, enabling immersive learning environments that promote student engagement and understanding of complex concepts. However, despite the growing adoption of VR in education, there remains a significant gap in research exploring how generative artificial intelligence (AI), such as generative pretrained transformer can further enhance these experiences by reducing cognitive load and improving learning outcomes. This study examines the impact of an AI-driven instructor assistant in VR classrooms on student engagement, cognitive load, knowledge retention, and performance. A total of 52 participants were divided into two groups experiencing a VR lesson on the bubble sort algorithm, one with only a prescripted virtual instructor (control group), and the other with the addition of an AI instructor assistant (experimental group). Statistical analysis of postlesson quizzes and cognitive load assessments was conducted using independent t-tests and analysis of variance (ANOVA), with the cognitive load being measured through a postexperiment questionnaire. The study results indicate that the experimental group reported significantly higher engagement compared to the control group. While the AI assistant did not significantly improve postlesson assessment scores, it enhanced conceptual knowledge transfer. The experimental group also demonstrated lower intrinsic cognitive load, suggesting the assistant reduced the perceived complexity of the material. Higher germane and general cognitive loads indicated that students were more invested in meaningful learning without feeling overwhelmed. © 2025 Elsevier B.V., All rights reserved.},
note = {Publisher: Institute of Electrical and Electronics Engineers Inc.},
keywords = {Adversarial machine learning, Cognitive loads, Computer interaction, Contrastive Learning, Control groups, Experimental groups, Federated learning, Generative AI, Generative artificial intelligence (GenAI), human–computer interaction, Interactive learning environment, interactive learning environments, Learning efficacy, Learning outcome, learning outcomes, Student engagement, Teaching assistants, Virtual environments, Virtual Reality (VR)},
pubstate = {published},
tppubtype = {article}
}
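The abstract above reports group comparisons made with independent t-tests and ANOVA. A minimal sketch of that kind of analysis using scipy.stats is shown below; the quiz scores are illustrative placeholders, not the study's data.

from scipy import stats

control = [62, 70, 68, 75, 71, 66, 73, 69]        # placeholder post-lesson quiz scores
experimental = [74, 78, 69, 81, 77, 72, 80, 75]   # placeholder post-lesson quiz scores

t_stat, p_value = stats.ttest_ind(experimental, control)   # independent two-sample t-test
f_stat, p_anova = stats.f_oneway(experimental, control)    # one-way ANOVA

print(f"independent t-test: t={t_stat:.2f}, p={p_value:.3f}")
print(f"one-way ANOVA:      F={f_stat:.2f}, p={p_anova:.3f}")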
Angelopoulos, J.; Manettas, C.; Alexopoulos, K.
Industrial Maintenance Optimization Based on the Integration of Large Language Models (LLM) and Augmented Reality (AR) Proceedings Article
In: K., Alexopoulos; S., Makris; P., Stavropoulos (Ed.): Lect. Notes Mech. Eng., pp. 197–205, Springer Science and Business Media Deutschland GmbH, 2025, ISBN: 21954356 (ISSN); 978-303186488-9 (ISBN).
@inproceedings{angelopoulos_industrial_2025,
title = {Industrial Maintenance Optimization Based on the Integration of Large Language Models (LLM) and Augmented Reality (AR)},
author = {J. Angelopoulos and C. Manettas and K. Alexopoulos},
editor = {Alexopoulos K. and Makris S. and Stavropoulos P.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001421726&doi=10.1007%2f978-3-031-86489-6_20&partnerID=40&md5=63be31b9f4dda4aafd6a641630506c09},
doi = {10.1007/978-3-031-86489-6_20},
isbn = {21954356 (ISSN); 978-303186488-9 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Mech. Eng.},
pages = {197–205},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {Traditional maintenance procedures often rely on manual data processing and human expertise, leading to inefficiencies and potential errors. In the context of Industry 4.0 several digital technologies, such as Artificial Intelligence (AI), Big Data Analytics (BDA), and eXtended Reality (XR) have been developed and are constantly being integrated in a plethora of manufacturing activities (including industrial maintenance), in an attempt to minimize human error, facilitate shop floor technicians, reduce costs as well as reduce equipment downtimes. The latest developments in the field of AI point towards Large Language Models (LLM) which can communicate with human operators in an intuitive manner. On the other hand, Augmented Reality, as part of XR technologies, offers useful functionalities for improving user perception and interaction with modern, complex industrial equipment. Therefore, the context of this research work lies in the development and training of an LLM in order to provide suggestions and actionable items for the mitigation of unforeseen events (e.g. equipment breakdowns), in order to facilitate shop-floor technicians during their everyday tasks. Paired with AR visualizations over the physical environment, the technicians will get instructions for performing tasks and checks on the industrial equipment in a manner similar to human-to-human communication. The functionality of the proposed framework extends to the integration of modules for exchanging information with the engineering department towards the scheduling of Maintenance and Repair Operations (MRO) as well as the creation of a repository of historical data in order to constantly retrain and optimize the LLM. © The Author(s) 2025.},
keywords = {Augmented Reality, Competition, Cost reduction, Critical path analysis, Crushed stone plants, Generative AI, generative artificial intelligence, Human expertise, Industrial equipment, Industrial maintenance, Language Model, Large language model, Maintenance, Maintenance optimization, Maintenance procedures, Manufacturing data processing, Potential errors, Problem oriented languages, Scheduled maintenance, Shopfloors, Solar power plants},
pubstate = {published},
tppubtype = {inproceedings}
}
Paduraru, C.; Bouruc, P. -L.; Stefanescu, A.
Generative AI for Human 3D Body Emotions: A Dataset and Baseline Methods Proceedings Article
In: A.P., Rocha; L., Steels; H.J., Herik (Ed.): Int. Conf. Agent. Artif. Intell., pp. 646–653, Science and Technology Publications, Lda, 2025, ISBN: 21843589 (ISSN).
@inproceedings{paduraru_generative_2025,
title = {Generative AI for Human 3D Body Emotions: A Dataset and Baseline Methods},
author = {C. Paduraru and P. -L. Bouruc and A. Stefanescu},
editor = {Rocha A.P. and Steels L. and Herik H.J.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001951577&doi=10.5220%2f0013168700003890&partnerID=40&md5=7fa058a0c9ec8275083b55e8990a8d22},
doi = {10.5220/0013168700003890},
isbn = {21843589 (ISSN)},
year = {2025},
date = {2025-01-01},
booktitle = {Int. Conf. Agent. Artif. Intell.},
volume = {3},
pages = {646–653},
publisher = {Science and Technology Publications, Lda},
abstract = {Accurate and expressive representation of human emotions in 3D models remains a major challenge in various industries, including gaming, film, healthcare, virtual reality and robotics. This work aims to address this challenge by utilizing a new dataset and a set of baseline methods within an open-source framework developed to improve realism and emotional expressiveness in human 3D representations. At the center of this work is the use of a novel and diverse dataset consisting of short video clips showing people mimicking specific emotions: anger, happiness, surprise, disgust, sadness, and fear. The dataset was further processed using state-of-the-art parametric body models that accurately reproduce these emotions. The resulting 3D meshes were then integrated into a generative pose generation model capable of producing similar emotions. © 2025 by SCITEPRESS – Science and Technology Publications, Lda.},
keywords = {Animations, Body Emotions, Generative AI, Parametric Models},
pubstate = {published},
tppubtype = {inproceedings}
}
Li, Y.; Pang, E. C. H.; Ng, C. S. Y.; Azim, M.; Leung, H.
Enhancing Linear Algebra Education with AI-Generated Content in the CityU Metaverse: A Comparative Study Proceedings Article
In: T., Hao; J.G., Wu; X., Luo; Y., Sun; Y., Mu; S., Ge; W., Xie (Ed.): Lect. Notes Comput. Sci., pp. 3–16, Springer Science and Business Media Deutschland GmbH, 2025, ISBN: 03029743 (ISSN); 978-981964406-3 (ISBN).
@inproceedings{li_enhancing_2025,
title = {Enhancing Linear Algebra Education with AI-Generated Content in the CityU Metaverse: A Comparative Study},
author = {Y. Li and E. C. H. Pang and C. S. Y. Ng and M. Azim and H. Leung},
editor = {Hao T. and Wu J.G. and Luo X. and Sun Y. and Mu Y. and Ge S. and Xie W.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003632691&doi=10.1007%2f978-981-96-4407-0_1&partnerID=40&md5=c067ba5d4c15e9c0353bf315680531fc},
doi = {10.1007/978-981-96-4407-0_1},
isbn = {03029743 (ISSN); 978-981964406-3 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15589 LNCS},
pages = {3–16},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {In today’s digital age, the metaverse is emerging as the forthcoming evolution of the internet. It provides an immersive space that marks a new frontier in the way digital interactions are facilitated and experienced. In this paper, we present the CityU Metaverse, which aims to construct a digital twin of our university campus. It is designed as an educational virtual world where learning applications can be embedded in this virtual campus, supporting not only remote and collaborative learning but also professional technical training to enhance educational experiences through immersive and interactive learning. To evaluate the effectiveness of this educational metaverse, we conducted an experiment focused on 3D linear transformation in linear algebra, with teaching content generated by generative AI, comparing our metaverse system with traditional teaching methods. Knowledge tests and surveys assessing learning interest revealed that students engaged with the CityU Metaverse, facilitated by AI-generated content, outperformed those in traditional settings and reported greater enjoyment during the learning process. The work provides valuable perspectives on the behaviors and interactions within the metaverse by analyzing user preferences and learning outcomes. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.},
keywords = {Comparatives studies, Digital age, Digital interactions, digital twin, Educational metaverse, Engineering education, Generative AI, Immersive, Matrix algebra, Metaverse, Metaverses, Personnel training, Students, Teaching, University campus, Virtual environments, virtual learning environment, Virtual learning environments, Virtual Reality, Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
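The lesson evaluated in this study covers 3D linear transformations. As a small, self-contained illustration (not the authors' teaching content), the snippet below applies a rotation-about-z matrix to a 3D vector with numpy, the kind of operation such a lesson visualizes.

import numpy as np

theta = np.pi / 4                      # 45-degree rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

v = np.array([1.0, 0.0, 0.0])          # unit vector along x
print(R @ v)                           # rotated vector, approximately [0.707, 0.707, 0.0]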
Stacchio, L.; Balloni, E.; Frontoni, E.; Paolanti, M.; Zingaretti, P.; Pierdicca, R.
MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 3602–3612, 2025, ISSN: 10772626 (ISSN), (Publisher: IEEE Computer Society).
@article{stacchio_minevra_2025,
title = {MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach},
author = {L. Stacchio and E. Balloni and E. Frontoni and M. Paolanti and P. Zingaretti and R. Pierdicca},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003746367&doi=10.1109%2FTVCG.2025.3549160&partnerID=40&md5=3356eb968b3e6a0d3c9b75716b05fac4},
doi = {10.1109/TVCG.2025.3549160},
issn = {10772626 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {31},
number = {5},
pages = {3602–3612},
abstract = {The convergence of Artificial Intelligence (AI), Computer Vision (CV), Computer Graphics (CG), and Extended Reality (XR) is driving innovation in immersive environments. A key challenge in these environments is the creation of personalized 3D assets, traditionally achieved through manual modeling, a time-consuming process that often fails to meet individual user needs. More recently, Generative AI (GenAI) has emerged as a promising solution for automated, context-aware content generation. In this paper, we present MineVRA (Multimodal generative artificial iNtelligence for contExt-aware Virtual Reality Assets), a novel Human-In-The-Loop (HITL) XR framework that integrates GenAI to facilitate coherent and adaptive 3D content generation in immersive scenarios. To evaluate the effectiveness of this approach, we conducted a comparative user study analyzing the performance and user satisfaction of GenAI-generated 3D objects compared to those generated by Sketchfab in different immersive contexts. The results suggest that GenAI can significantly complement traditional 3D asset libraries, with valuable design implications for the development of human-centered XR environments. © 2025 Elsevier B.V., All rights reserved.},
note = {Publisher: IEEE Computer Society},
keywords = {adult, Article, Artificial intelligence, Computer graphics, Computer vision, Content Development, Contents development, Context-Aware, Context-aware approaches, Extended reality, female, Generative adversarial networks, Generative AI, generative artificial intelligence, human, Human-in-the-loop, Immersive, Immersive environment, male, Multi-modal, User need, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}
Li, Z.; Zhang, H.; Peng, C.; Peiris, R.
Exploring Large Language Model-Driven Agents for Environment-Aware Spatial Interactions and Conversations in Virtual Reality Role-Play Scenarios Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces, VR, pp. 1–11, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331536459 (ISBN).
@inproceedings{li_exploring_2025,
title = {Exploring Large Language Model-Driven Agents for Environment-Aware Spatial Interactions and Conversations in Virtual Reality Role-Play Scenarios},
author = {Z. Li and H. Zhang and C. Peng and R. Peiris},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105002706893&doi=10.1109%2FVR59515.2025.00025&partnerID=40&md5=1987c128f6ec4bd24011388ef9ece179},
doi = {10.1109/VR59515.2025.00025},
isbn = {9798331536459 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces, VR},
pages = {1–11},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Recent research has begun adopting Large Language Model (LLM) agents to enhance Virtual Reality (VR) interactions, creating immersive chatbot experiences. However, while current studies focus on generating dialogue from user speech inputs, their abilities to generate richer experiences based on the perception of LLM agents' VR environments and interaction cues remain unexplored. Hence, in this work, we propose an approach that enables LLM agents to perceive virtual environments and generate environment-aware interactions and conversations for an embodied human-AI interaction experience in VR environments. Here, we define a schema for describing VR environments and their interactions through text prompts. We evaluate the performance of our method through five role-play scenarios created using our approach in a study with 14 participants. The findings discuss the opportunities and challenges of our proposed approach for developing environment-aware LLM agents that facilitate spatial interactions and conversations within VR role-play scenarios. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Chatbots, Computer simulation languages, Context- awareness, context-awareness, Digital elevation model, Generative AI, Human-AI Interaction, Language Model, Large language model, large language models, Model agents, Role-play simulation, role-play simulations, Role-plays, Spatial interaction, Virtual environments, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
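The abstract above mentions a schema that describes the VR environment and its interaction cues as a text prompt for the LLM agent. The structure below is a hypothetical illustration of that idea in Python; it is not the authors' actual schema or prompt wording.

import json

scene = {
    "location": "medieval tavern",
    "objects": [
        {"name": "wooden mug", "state": "full", "interactions": ["pick_up", "drink"]},
        {"name": "fireplace", "state": "lit", "interactions": ["extinguish"]},
    ],
    "user_action": "the player sits down at the bar",
}

prompt = (
    "You are an innkeeper in a VR role-play scene.\n"
    f"Scene description (JSON): {json.dumps(scene)}\n"
    "Respond with a short line of dialogue and, if appropriate, one interaction "
    "from the listed options in the form ACTION(object)."
)
print(prompt)   # this string would be sent to the LLM agent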
Scofano, L.; Sampieri, A.; Matteis, E.; Spinelli, I.; Galasso, F.
Social EgoMesh Estimation Proceedings Article
In: Proc. - IEEE Winter Conf. Appl. Comput. Vis., WACV, pp. 5948–5958, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331510831 (ISBN).
@inproceedings{scofano_social_2025,
title = {Social EgoMesh Estimation},
author = {L. Scofano and A. Sampieri and E. Matteis and I. Spinelli and F. Galasso},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003632729&doi=10.1109%2FWACV61041.2025.00580&partnerID=40&md5=95666e3f74ddb6b96666a80eace686c5},
doi = {10.1109/WACV61041.2025.00580},
isbn = {9798331510831 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Winter Conf. Appl. Comput. Vis., WACV},
pages = {5948–5958},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Accurately estimating the 3D pose of the camera wearer in egocentric video sequences is crucial to modeling human behavior in virtual and augmented reality applications. The task presents unique challenges due to the limited visibility of the user's body caused by the front-facing camera mounted on their head. Recent research has explored the utilization of the scene and ego-motion, but it has overlooked humans' interactive nature. We propose a novel framework for Social Egocentric Estimation of body MEshes (SEE-ME). Our approach is the first to estimate the wearer's mesh using only a latent probabilistic diffusion model, which we condition on the scene and, for the first time, on the social wearer-interactee interactions. Our in-depth study sheds light on when social interaction matters most for ego-mesh estimation; it quantifies the impact of interpersonal distance and gaze direction. Overall, SEEME surpasses the current best technique, reducing the pose estimation error (MPJPE) by 53%. The code is available at SEEME. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Augmented reality applications, Ego-motion, Egocentric view, Generative AI, Human behaviors, Human mesh recovery, Limited visibility, Recent researches, Three dimensional computer graphics, Video sequences, Virtual and augmented reality},
pubstate = {published},
tppubtype = {inproceedings}
}
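Pose error in the abstract above is reported as MPJPE (Mean Per-Joint Position Error). The snippet below implements the standard definition of that metric on placeholder arrays; it is not the SEE-ME evaluation code.

import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth joints.
    pred, gt: arrays of shape (num_frames, num_joints, 3), in the same units (e.g. mm)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

rng = np.random.default_rng(0)
gt = rng.normal(size=(10, 17, 3))                    # placeholder ground-truth poses
pred = gt + rng.normal(scale=0.05, size=gt.shape)    # placeholder predictions
print(f"MPJPE: {mpjpe(pred, gt):.4f}")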
Nguyen, A.; Gul, F.; Dang, B.; Huynh, L.; Tuunanen, T.
Designing embodied generative artificial intelligence in mixed reality for active learning in higher education Journal Article
In: Innovations in Education and Teaching International, vol. 62, no. 5, pp. 1632–1647, 2025, ISSN: 14703297 (ISSN); 14703300 (ISSN), (Publisher: Routledge).
@article{nguyen_designing_2025,
title = {Designing embodied generative artificial intelligence in mixed reality for active learning in higher education},
author = {A. Nguyen and F. Gul and B. Dang and L. Huynh and T. Tuunanen},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105004906187&doi=10.1080%2F14703297.2025.2499177&partnerID=40&md5=5c1e8652b53ab3bf680555a1dcadd5f4},
doi = {10.1080/14703297.2025.2499177},
issn = {14703297 (ISSN); 14703300 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {Innovations in Education and Teaching International},
volume = {62},
number = {5},
pages = {1632–1647},
abstract = {Generative Artificial Intelligence (GenAI) technologies have introduced significant changes to higher education, but the role of Embodied GenAI Agents in Mixed Reality (MR) environments is still relatively unexplored. This study was carried out to develop an embodied GenAI system designed to facilitate active learning, self-regulated learning and enhance human-AI shared regulation in educational settings. The study also aimed to understand how adult learners engage with and perceive these anthropomorphic agents in an immersive MR setting, with a particular focus on their effects on active learning and cognitive load. Using an echeloned Design Science Research (eDSR) approach, we developed an MR learning experience incorporating an Embodied GenAI Agent. The application was demonstrated with 26 higher education learners through questionnaires and observational recordings. Our study contributes to the ongoing design and development of AI-based educational tools, with the potential to afford more active and agentic learning experiences. © 2025 Elsevier B.V., All rights reserved.},
note = {Publisher: Routledge},
keywords = {Active learning, Generative AI, higher education, Mixed reality, Self-regulated learning},
pubstate = {published},
tppubtype = {article}
}
Sousa, R. T.; Oliveira, E. A. M.; Cintra, L. M. F.; Galvão Filho, A. R. G.
Transformative Technologies for Rehabilitation: Leveraging Immersive and AI-Driven Solutions to Reduce Recidivism and Promote Decent Work Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW, pp. 168–171, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331514846 (ISBN).
@inproceedings{sousa_transformative_2025,
title = {Transformative Technologies for Rehabilitation: Leveraging Immersive and AI-Driven Solutions to Reduce Recidivism and Promote Decent Work},
author = {R. T. Sousa and E. A. M. Oliveira and L. M. F. Cintra and A. R. G. Galvão Filho},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005140551&doi=10.1109%2FVRW66409.2025.00042&partnerID=40&md5=a8dbe15493fd8361602d049f2b09efe3},
doi = {10.1109/VRW66409.2025.00042},
isbn = {9798331514846 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW},
pages = {168–171},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The reintegration of incarcerated individuals into society presents significant challenges, particularly in addressing barriers related to vocational training, social skill development, and emotional rehabilitation. Immersive technologies, such as Virtual Reality and Augmented Reality, combined with generative Artificial Intelligence (AI) and Large Language Models, offer innovative opportunities to enhance these areas. These technologies create practical, controlled environments for skill acquisition and behavioral training, while generative AI enables dynamic, personalized, and adaptive experiences. This paper explores the broader potential of these integrated technologies in supporting rehabilitation, reducing recidivism, and fostering sustainable employment opportunities and these initiatives align with the overarching equity objective of ensuring Decent Work for All, reinforcing the commitment to inclusive and equitable progress across diverse communities, through the transformative potential of immersive and AI-driven systems in correctional systems. © 2025 Elsevier B.V., All rights reserved.},
keywords = {AI- Driven Rehabilitation, Artificial intelligence- driven rehabilitation, Emotional intelligence, Engineering education, Generative AI, generative artificial intelligence, Immersive, Immersive technologies, Immersive Technology, Language Model, Large language model, large language models, Skills development, Social Reintegration, Social skills, Sociology, Vocational training},
pubstate = {published},
tppubtype = {inproceedings}
}
Lopes, M. K. S.; Falk, T. H.
Generative AI for Personalized Multisensory Immersive Experiences: Challenges and Opportunities for Stress Reduction Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW, pp. 143–146, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331514846 (ISBN).
@inproceedings{lopes_generative_2025,
title = {Generative AI for Personalized Multisensory Immersive Experiences: Challenges and Opportunities for Stress Reduction},
author = {M. K. S. Lopes and T. H. Falk},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005149501&doi=10.1109%2FVRW66409.2025.00036&partnerID=40&md5=9507cf2dcec341c434e08e8b6f92bfda},
doi = {10.1109/VRW66409.2025.00036},
isbn = {9798331514846 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW},
pages = {143–146},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Stress management and relaxation are critical areas of interest in mental health and well-being. Forest bathing is a practice that has been shown to have a positive effect on reducing stress by stimulating all the senses in an immersive nature experience. Since access to nature is not universally available to everyone, virtual reality has emerged as a promising tool to simulate this type of experience. Furthermore, generative artificial intelligence (GenAI) tools offer new opportunities to create highly personalized and immersive experiences that can enhance relaxation and reduce stress. This study explores the potential of personalized multisensory VR environments, designed using GenAI tools, to optimize relaxation and stress relief via two experiments that are currently underway. The first evaluates the effectiveness of non-personalized versus personalized VR scenes generated using AI tools to promote increased relaxation. The second explores the potential benefits of providing the user with additional personalization tools, from adding new virtual elements to the AI-generated scene, to adding AI-generated sounds and scent/haptics customization. Ultimately, this research aims to identify which customizable elements may lead to improved therapeutic benefits for multisensory VR experiences. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Artificial intelligence tools, Environment personalization, Forest bathing, Generative AI, Immersive, Multi-Sensory, Multi-sensory virtual reality, Multisensory, Personalizations, Relaxation, Virtual Reality, Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
Behravan, M.; Grǎcanin, D.
From Voices to Worlds: Developing an AI-Powered Framework for 3D Object Generation in Augmented Reality Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW, pp. 150–155, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331514846 (ISBN).
@inproceedings{behravan_voices_2025,
title = {From Voices to Worlds: Developing an AI-Powered Framework for 3D Object Generation in Augmented Reality},
author = {M. Behravan and D. Grǎcanin},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005153589&doi=10.1109%2FVRW66409.2025.00038&partnerID=40&md5=34311e63349697801caf849bc231e879},
doi = {10.1109/VRW66409.2025.00038},
isbn = {9798331514846 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW},
pages = {150–155},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This paper presents Matrix, an advanced AI-powered framework designed for real-time 3D object generation in Augmented Reality (AR) environments. By integrating a cutting-edge text-to-3D generative AI model, multilingual speech-to-text translation, and large language models (LLMs), the system enables seamless user interactions through spoken commands. The framework processes speech inputs, generates 3D objects, and provides object recommendations based on contextual understanding, enhancing AR experiences. A key feature of this framework is its ability to optimize 3D models by reducing mesh complexity, resulting in significantly smaller file sizes and faster processing on resource-constrained AR devices. Our approach addresses the challenges of high GPU usage, large model output sizes, and real-time system responsiveness, ensuring a smoother user experience. Moreover, the system is equipped with a pre-generated object repository, further reducing GPU load and improving efficiency. We demonstrate the practical applications of this framework in various fields such as education, design, and accessibility, and discuss future enhancements including image-to-3D conversion, environmental object detection, and multimodal support. The open-source nature of the framework promotes ongoing innovation and its utility across diverse industries. © 2025 Elsevier B.V., All rights reserved.},
keywords = {3D modeling, 3D object, 3D Object Generation, 3D reconstruction, Augmented Reality, Cutting edges, Generative AI, Interactive computer systems, Language Model, Large language model, large language models, matrix, Multilingual speech interaction, Real- time, Speech enhancement, Speech interaction, Volume Rendering},
pubstate = {published},
tppubtype = {inproceedings}
}
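A key step the abstract above highlights is reducing mesh complexity so generated objects load quickly on resource-constrained AR devices. The sketch below shows one way to do that with Open3D's quadric decimation; the library choice, file names, and triangle budget are assumptions, not the framework's actual implementation.

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("generated_object.obj")   # output of the text-to-3D step
print(f"before: {len(mesh.triangles)} triangles")

lowpoly = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
print(f"after:  {len(lowpoly.triangles)} triangles")

o3d.io.write_triangle_mesh("generated_object_lowpoly.obj", lowpoly)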
Kurai, R.; Hiraki, T.; Hiroi, Y.; Hirao, Y.; Perusquía-Hernández, M.; Uchiyama, H.; Kiyokawa, K.
An implementation of MagicCraft: Generating Interactive 3D Objects and Their Behaviors from Text for Commercial Metaverse Platforms Proceedings Article
In: Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW, pp. 1284–1285, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331514846 (ISBN).
@inproceedings{kurai_implementation_2025,
title = {An implementation of MagicCraft: Generating Interactive 3D Objects and Their Behaviors from Text for Commercial Metaverse Platforms},
author = {R. Kurai and T. Hiraki and Y. Hiroi and Y. Hirao and M. Perusquía-Hernández and H. Uchiyama and K. Kiyokawa},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005153642&doi=10.1109%2FVRW66409.2025.00288&partnerID=40&md5=fd6d3b8d0dcbc5b9ccd4c31069bb4f4a},
doi = {10.1109/VRW66409.2025.00288},
isbn = {9798331514846 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Conf. Virtual Real. 3D User Interfaces Abstr. Workshops, VRW},
pages = {1284–1285},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Metaverse platforms are rapidly evolving to provide immersive spaces. However, the generation of dynamic and interactive 3D objects remains a challenge due to the need for advanced 3D modeling and programming skills. We present MagicCraft, a system that generates functional 3D objects from natural language prompts. MagicCraft uses generative AI models to manage the entire content creation pipeline: converting user text descriptions into images, transforming images into 3D models, predicting object behavior, and assigning necessary attributes and scripts. It also provides an interactive interface for users to refine generated objects by adjusting features like orientation, scale, seating positions, and grip points. © 2025 Elsevier B.V., All rights reserved.},
keywords = {3D modeling, 3D models, 3D object, 3D Object Generation, 3d-modeling, AI-Assisted Design, Generative AI, Immersive, Metaverse, Metaverses, Model skill, Object oriented programming, Programming skills},
pubstate = {published},
tppubtype = {inproceedings}
}
Behravan, M.; Haghani, M.; Gračanin, D.
Transcending Dimensions Using Generative AI: Real-Time 3D Model Generation in Augmented Reality Proceedings Article
In: J.Y.C., Chen; G., Fragomeni (Ed.): Lect. Notes Comput. Sci., pp. 13–32, Springer Science and Business Media Deutschland GmbH, 2025, ISBN: 03029743 (ISSN); 978-303193699-9 (ISBN).
@inproceedings{behravan_transcending_2025,
title = {Transcending Dimensions Using Generative AI: Real-Time 3D Model Generation in Augmented Reality},
author = {M. Behravan and M. Haghani and D. Gračanin},
editor = {Chen J.Y.C. and Fragomeni G.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007690904&doi=10.1007%2f978-3-031-93700-2_2&partnerID=40&md5=1c4d643aad88d08cbbc9dd2c02413f10},
doi = {10.1007/978-3-031-93700-2_2},
isbn = {03029743 (ISSN); 978-303193699-9 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15788 LNCS},
pages = {13–32},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {Traditional 3D modeling requires technical expertise, specialized software, and time-intensive processes, making it inaccessible for many users. Our research aims to lower these barriers by combining generative AI and augmented reality (AR) into a cohesive system that allows users to easily generate, manipulate, and interact with 3D models in real time, directly within AR environments. Utilizing cutting-edge AI models like Shap-E, we address the complex challenges of transforming 2D images into 3D representations in AR environments. Key challenges such as object isolation, handling intricate backgrounds, and achieving seamless user interaction are tackled through advanced object detection methods, such as Mask R-CNN. Evaluation results from 35 participants reveal an overall System Usability Scale (SUS) score of 69.64, with participants who engaged with AR/VR technologies more frequently rating the system significantly higher, at 80.71. This research is particularly relevant for applications in gaming, education, and AR-based e-commerce, offering intuitive, model creation for users without specialized skills. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {3D Model Generation, 3D modeling, 3D models, 3d-modeling, Augmented Reality, Generative AI, Image-to-3D conversion, Model generation, Object Detection, Object recognition, Objects detection, Real- time, Specialized software, Technical expertise, Three dimensional computer graphics, Usability engineering},
pubstate = {published},
tppubtype = {inproceedings}
}
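Usability in the study above is reported as System Usability Scale (SUS) scores (overall 69.64). The function below applies the standard SUS scoring rule; the responses are illustrative placeholders, not participant data.

def sus_score(responses: list[int]) -> float:
    """responses: ten 1-5 Likert answers in questionnaire order (item 1 first)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)   # odd-numbered items: r-1; even: 5-r
    return total * 2.5                                 # scale the sum to 0-100

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))       # -> 75.0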
Otsuka, T.; Li, D.; Siriaraya, P.; Nakajima, S.
Development of A Relaxation Support System Utilizing Stereophonic AR Proceedings Article
In: Int. Conf. Comput., Netw. Commun., ICNC, pp. 463–467, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331520960 (ISBN).
@inproceedings{otsuka_development_2025,
title = {Development of A Relaxation Support System Utilizing Stereophonic AR},
author = {T. Otsuka and D. Li and P. Siriaraya and S. Nakajima},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105006602014&doi=10.1109%2FICNC64010.2025.10993739&partnerID=40&md5=72d1dea49b65a396c9a6788ce1ed3274},
doi = {10.1109/ICNC64010.2025.10993739},
isbn = {9798331520960 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Int. Conf. Comput., Netw. Commun., ICNC},
pages = {463–467},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Given the high prevalence of stress and anxiety in today's society, there is an urgent need to explore effective methods to help people manage stress. This research aims to develop a relaxation support system using stereophonic augmented reality (AR), designed to help alleviate stress by recreating relaxing environments with immersive stereo soundscapes, including stories created from generative AI and environmental sounds while users are going for a walk. This paper presents a preliminary evaluation of the effectiveness of the proposed relaxation support method. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Augmented Reality, Environmental sounds, Generative AI, Immersive, Mental Well-being, Soundscapes, Spatial Audio, Stereo image processing, Support method, Support systems, Well being},
pubstate = {published},
tppubtype = {inproceedings}
}
Nygren, T.; Samuelsson, M.; Hansson, P. -O.; Efimova, E.; Bachelder, S.
AI Versus Human Feedback in Mixed Reality Simulations: Comparing LLM and Expert Mentoring in Preservice Teacher Education on Controversial Issues Journal Article
In: International Journal of Artificial Intelligence in Education, 2025, ISSN: 15604306 (ISSN); 15604292 (ISSN), (Publisher: Springer).
Abstract | Links | BibTeX | Tags: AI-generated feedback, Controversial issue in social study education, Controversial issues in social studies education, Curricula, Domain knowledge, Economic and social effects, Expert systems, Generative AI, Human engineering, Knowledge engineering, Language Model, Large language model, large language models (LLMs), Mixed reality, Mixed reality simulation, Mixed reality simulation (MRS), Pedagogical content knowledge, Pedagogical content knowledge (PCK), Personnel training, Preservice teachers, Social studies education, Teacher training, Teacher training simulation, Teacher training simulations, Teaching, Training simulation
@article{nygren_ai_2025,
title = {AI Versus Human Feedback in Mixed Reality Simulations: Comparing LLM and Expert Mentoring in Preservice Teacher Education on Controversial Issues},
author = {T. Nygren and M. Samuelsson and P. -O. Hansson and E. Efimova and S. Bachelder},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007244772&doi=10.1007%2Fs40593-025-00484-8&partnerID=40&md5=3404a614af6fe4d4d2cb284060600e3c},
doi = {10.1007/s40593-025-00484-8},
issn = {15604306 (ISSN); 15604292 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {International Journal of Artificial Intelligence in Education},
abstract = {This study explores the potential role of AI-generated mentoring within simulated environments designed for teacher education, specifically focused on the challenges of teaching controversial issues. Using a mixed-methods approach, we empirically investigate the potential and challenges of AI-generated feedback compared to that provided by human experts when mentoring preservice teachers in the context of mixed reality simulations. Findings reveal that human experts offered more mixed and nuanced feedback than ChatGPT-4o and Perplexity, especially when identifying missed teaching opportunities and balancing classroom discussions. The AI models evaluated were publicly available pro versions of LLMs and were tested using detailed prompts and coding schemes aligned with educational theories. AI systems were not very good at identifying aspects of general, pedagogical or content knowledge based on Shulman’s theories but were still quite effective in generating feedback in line with human experts. The study highlights the promise of AI to enhance teacher training but underscores the importance of combining AI feedback with expert insights to address the complexities of real-world teaching. This research contributes to a growing understanding of AI's potential role and limitations in education. It suggests that, while AI can be valuable to scale mixed reality simulations, it should be carefully evaluated and balanced by human expertise in teacher education. © 2025 Elsevier B.V., All rights reserved.},
note = {Publisher: Springer},
keywords = {AI-generated feedback, Controversial issue in social study education, Controversial issues in social studies education, Curricula, Domain knowledge, Economic and social effects, Expert systems, Generative AI, Human engineering, Knowledge engineering, Language Model, Large language model, large language models (LLMs), Mixed reality, Mixed reality simulation, Mixed reality simulation (MRS), Pedagogical content knowledge, Pedagogical content knowledge (PCK), Personnel training, Preservice teachers, Social studies education, Teacher training, Teacher training simulation, Teacher training simulations, Teaching, Training simulation},
pubstate = {published},
tppubtype = {article}
}
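An illustrative sketch of how LLM-generated mentoring feedback might be requested and tagged against Shulman-style categories for comparison with expert mentors, as in the study above. Assumptions: the OpenAI Python SDK and a "gpt-4o" model identifier; the study itself used publicly available chat interfaces, not this API, and the rubric wording here is hypothetical.

# Requesting rubric-aligned feedback on a simulated lesson transcript.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RUBRIC = (
    "You are a mentor for preservice teachers leading classroom discussions on "
    "controversial issues. Give feedback on the transcript and label each point "
    "as content knowledge, pedagogical knowledge, or pedagogical content knowledge."
)

def llm_feedback(transcript: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

# print(llm_feedback("Student A: I think the law is unfair... Teacher: ..."))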
Chang, K. -Y.; Lee, C. -F.
Enhancing Virtual Restorative Environment with Generative AI: Personalized Immersive Stress-Relief Experiences Proceedings Article
In: V.G., Duffy (Ed.): Lect. Notes Comput. Sci., pp. 132–144, Springer Science and Business Media Deutschland GmbH, 2025, ISBN: 03029743 (ISSN); 978-303193501-5 (ISBN).
Abstract | Links | BibTeX | Tags: Artificial intelligence generated content, Artificial Intelligence Generated Content (AIGC), Electroencephalography, Electroencephalography (EEG), Generative AI, Immersive, Immersive environment, Mental health, Physical limitations, Restorative environment, Stress relief, Virtual reality exposure therapies, Virtual reality exposure therapy, Virtual Reality Exposure Therapy (VRET), Virtualization
@inproceedings{chang_enhancing_2025,
title = {Enhancing Virtual Restorative Environment with Generative AI: Personalized Immersive Stress-Relief Experiences},
author = {K. -Y. Chang and C. -F. Lee},
editor = {Duffy V.G.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007759157&doi=10.1007%2f978-3-031-93502-2_9&partnerID=40&md5=ee620a5da9b65e90ccb1eaa75ec8b724},
doi = {10.1007/978-3-031-93502-2_9},
isbn = {03029743 (ISSN); 978-303193501-5 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15791 LNCS},
pages = {132–144},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {In today’s fast-paced world, stress and mental health challenges are becoming more common. Restorative environments help people relax and recover emotionally, and Virtual Reality Exposure Therapy (VRET) offers a way to experience these benefits beyond physical limitations. However, most VRET applications rely on pre-designed content, limiting their adaptability to individual needs. This study explores how Generative AI can enhance VRET by creating personalized, immersive environments that better match users’ preferences and improve relaxation. To evaluate the impact of AI-generated restorative environments, we combined EEG measurements with user interviews. Thirty university students participated in the study, experiencing two different modes: static mode and walking mode. The EEG results showed an increase in Theta (θ) and High Beta (β) brain waves, suggesting a state of deep immersion accompanied by heightened cognitive engagement and mental effort. While participants found the experience enjoyable and engaging, the AI-generated environments tended to create excitement and focus rather than conventional relaxation. These findings suggest that for AI-generated environments in VRET to be more effective for stress relief, future designs should reduce cognitive load while maintaining immersion. This study provides insights into how AI can enhance relaxation experiences and introduces a new perspective on personalized digital stress-relief solutions. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Artificial intelligence generated content, Artificial Intelligence Generated Content (AIGC), Electroencephalography, Electroencephalography (EEG), Generative AI, Immersive, Immersive environment, Mental health, Physical limitations, Restorative environment, Stress relief, Virtual reality exposure therapies, Virtual reality exposure therapy, Virtual Reality Exposure Therapy (VRET), Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
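A minimal sketch of the kind of EEG band-power analysis the abstract above refers to: a Welch power spectral density followed by mean power in the Theta and High Beta bands. The band edges and sampling rate here are common conventions chosen for illustration, not the paper's exact values.

# Theta and High Beta band power from a single EEG channel.
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= lo) & (freqs < hi)
    return float(psd[band].mean())

fs = 256.0
signal = np.random.randn(int(fs * 60))   # stand-in for 60 s of one channel
theta = band_power(signal, fs, 4.0, 8.0)
high_beta = band_power(signal, fs, 20.0, 30.0)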
Yokoyama, N.; Kimura, R.; Nakajima, T.
ViGen: Defamiliarizing Everyday Perception for Discovering Unexpected Insights Proceedings Article
In: H., Degen; S., Ntoa (Ed.): Lect. Notes Comput. Sci., pp. 397–417, Springer Science and Business Media Deutschland GmbH, 2025, ISBN: 03029743 (ISSN); 978-303193417-9 (ISBN).
Abstract | Links | BibTeX | Tags: Artful Expression, Artistic technique, Augmented Reality, Daily lives, Defamiliarization, Dynamic environments, Engineering education, Enhanced vision systems, Generative AI, generative artificial intelligence, Human augmentation, Human engineering, Human-AI Interaction, Human-artificial intelligence interaction, Semi-transparent
@inproceedings{yokoyama_vigen_2025,
title = {ViGen: Defamiliarizing Everyday Perception for Discovering Unexpected Insights},
author = {N. Yokoyama and R. Kimura and T. Nakajima},
editor = {Degen H. and Ntoa S.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007760030&doi=10.1007%2f978-3-031-93418-6_26&partnerID=40&md5=dee6f54688284313a45579aab5f934d6},
doi = {10.1007/978-3-031-93418-6_26},
isbn = {03029743 (ISSN); 978-303193417-9 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15821 LNAI},
pages = {397–417},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {This paper proposes ViGen, an Augmented Reality (AR) and Artificial Intelligence (AI)-enhanced vision system designed to facilitate defamiliarization in daily life. Humans rely on sight to gather information, think, and act, yet the act of seeing often becomes passive in daily life. Inspired by Victor Shklovsky’s concept of defamiliarization and the artistic technique of photomontage, ViGen seeks to disrupt habitual perceptions. It achieves this by overlaying semi-transparent, AI-generated images, created based on the user’s view, through an AR display. The system is evaluated by several structured interviews, in which participants experience ViGen in three different scenarios. Results indicate that AI-generated visuals effectively supported defamiliarization by transforming ordinary scenes into unfamiliar ones. However, the user’s familiarity with a place plays a significant role. Also, while the feature that adjusts the transparency of overlaid images enhances safety, its limitations in dynamic environments suggest the need for further research across diverse cultural and geographic contexts. This study demonstrates the potential of AI-augmented vision systems to stimulate new ways of seeing, offering insights for further development in visual augmentation technologies. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Artful Expression, Artistic technique, Augmented Reality, Daily lives, Defamiliarization, Dynamic environments, Engineering education, Enhanced vision systems, Generative AI, generative artificial intelligence, Human augmentation, Human engineering, Human-AI Interaction, Human-artificial intelligence interaction, Semi-transparent},
pubstate = {published},
tppubtype = {inproceedings}
}
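A small sketch of the semi-transparent overlay idea described in the ViGen abstract above: alpha-blending an AI-generated image over the current camera frame, with a user-adjustable transparency value. The generation step itself and the file names are assumed for illustration.

# Alpha-blend a generated image over a camera frame.
from PIL import Image

def overlay(camera_frame: Image.Image, generated: Image.Image, alpha: float) -> Image.Image:
    """alpha = 0.0 shows only the real scene, 1.0 only the generated image."""
    generated = generated.resize(camera_frame.size)
    return Image.blend(camera_frame.convert("RGB"), generated.convert("RGB"), alpha)

# frame = Image.open("camera_frame.jpg")
# dream = Image.open("ai_generated.png")    # produced by an image model
# composited = overlay(frame, dream, alpha=0.4)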
Tortora, A.; Amaro, I.; Greca, A. Della; Barra, P.
Exploring the Role of Generative Artificial Intelligence in Virtual Reality: Opportunities and Future Perspectives Proceedings Article
In: J.Y.C., Chen; G., Fragomeni (Ed.): Lect. Notes Comput. Sci., pp. 125–142, Springer Science and Business Media Deutschland GmbH, 2025, ISBN: 03029743 (ISSN); 978-303193699-9 (ISBN).
Abstract | Links | BibTeX | Tags: Ethical technology, Future perspectives, Generative AI, Image modeling, Immersive, immersive experience, Immersive Experiences, Information Management, Language Model, Personnel training, Professional training, Real- time, Sensitive data, Training design, Users' experiences, Virtual Reality
@inproceedings{tortora_exploring_2025,
title = {Exploring the Role of Generative Artificial Intelligence in Virtual Reality: Opportunities and Future Perspectives},
author = {A. Tortora and I. Amaro and A. Della Greca and P. Barra},
editor = {Chen J.Y.C. and Fragomeni G.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007788684&doi=10.1007%2f978-3-031-93700-2_9&partnerID=40&md5=7b69183bbf8172f9595f939254fb6831},
doi = {10.1007/978-3-031-93700-2_9},
isbn = {03029743 (ISSN); 978-303193699-9 (ISBN)},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15788 LNCS},
pages = {125–142},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {In recent years, generative AI, such as language and image models, has started to revolutionize virtual reality (VR) by offering new opportunities for immersive and personalized interaction. This paper explores the potential of these Intelligent Augmentation technologies in the context of VR, analyzing how the generation of text and images in real time can enhance the user experience through dynamic and personalized environments and content. The integration of generative AI in VR scenarios holds promise in multiple fields, including education, professional training, design, and healthcare. However, their implementation involves significant challenges, such as privacy management, data security, and ethical issues related to cognitive manipulation and representation of reality. Through an overview of current applications and future prospects, this paper highlights the crucial role of generative AI in enhancing VR, helping to outline a path for the ethical and sustainable development of these immersive technologies. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Ethical technology, Future perspectives, Generative AI, Image modeling, Immersive, immersive experience, Immersive Experiences, Information Management, Language Model, Personnel training, Professional training, Real- time, Sensitive data, Training design, Users' experiences, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
Monjoree, U.; Yan, W.
Assessing AI Models' Spatial Visualization in PSVT:R and Augmented Reality: Towards Enhancing AI's Spatial Intelligence Proceedings Article
In: pp. 727–734, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331524005 (ISBN).
Abstract | Links | BibTeX | Tags: 3D modeling, Architecture engineering, Artificial intelligence, Augmented Reality, Construction science, Engineering education, Engineering science, Generative AI, generative artificial intelligence, Image processing, Intelligence models, Linear transformations, Medicine, Rotation, Rotation process, Spatial Intelligence, Spatial rotation, Spatial visualization, Three dimensional computer graphics, Three dimensional space, Visualization
@inproceedings{monjoree_assessing_2025,
title = {Assessing AI Models' Spatial Visualization in PSVT:R and Augmented Reality: Towards Enhancing AI's Spatial Intelligence},
author = {U. Monjoree and W. Yan},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105011255775&doi=10.1109%2FCAI64502.2025.00131&partnerID=40&md5=0bd551863839b3025898e55265403969},
doi = {10.1109/CAI64502.2025.00131},
isbn = {9798331524005 (ISBN)},
year = {2025},
date = {2025-01-01},
pages = {727–734},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Spatial intelligence is important in many fields, such as Architecture, Engineering, and Construction (AEC), Science, Technology, Engineering, and Mathematics (STEM), and Medicine. Understanding three-dimensional (3D) spatial rotations can involve verbal descriptions and visual or interactive examples, illustrating how objects move and change orientation in 3D space. Recent studies show that artificial intelligence (AI) with language and vision capabilities still faces limitations in spatial reasoning. In this paper, we studied the spatial capabilities of advanced generative AI to understand the rotations of objects in 3D space, utilizing its image processing and language processing features. We examined the spatial intelligence of three generative AI models (GPT-4, Gemini 1.5 Pro, and Llama 3.2) to understand the spatial rotation process with spatial rotation diagrams based on the revised Purdue Spatial Visualization Test: Visualization of Rotations (Revised PSVT:R). Furthermore, we incorporated an added layer of coordinate system axes on the Revised PSVT:R to study the variations in generative AI models' performance. We additionally examined generative AI models' understanding of 3D rotations in Augmented Reality (AR) scene images that visualize spatial rotations of a physical object in 3D space, and observed increased accuracy in the generative AI models' understanding of rotations when additional textual information depicting the rotation process, or mathematical representations of the rotation (e.g., matrices), was superimposed on the object. The results indicate that while GPT-4, Gemini 1.5 Pro, and Llama 3.2, as leading current generative AI models, lack an understanding of the spatial rotation process, they have the potential to understand it with additional information that can be provided by methods such as AR. AR can superimpose textual information or mathematical representations of rotations on spatial transformation diagrams and create a more intelligible input for AI to comprehend or for training AI's spatial intelligence. Furthermore, by combining AI's potential for spatial intelligence with AR's interactive visualization abilities, we expect to offer enhanced guidance for students' spatial learning activities. Such spatial guidance can greatly benefit the understanding of spatial transformations and additionally support processes like assembly, construction, and manufacturing, as well as learning in AEC, STEM, and Medicine, which require precise 3D spatial understanding. © 2025 Elsevier B.V., All rights reserved.},
keywords = {3D modeling, Architecture engineering, Artificial intelligence, Augmented Reality, Construction science, Engineering education, Engineering science, Generative AI, generative artificial intelligence, Image processing, Intelligence models, Linear transformations, Medicine, Rotation, Rotation process, Spatial Intelligence, Spatial rotation, Spatial visualization, Three dimensional computer graphics, Three dimensional space, Visualization},
pubstate = {published},
tppubtype = {inproceedings}
}
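A worked sketch of the "mathematical representation of the rotation" that the abstract above superimposes on PSVT:R-style diagrams: a rotation matrix about the z-axis applied to a point in 3D space. The axis and angle are chosen purely for illustration.

# Rotation matrix about the z-axis applied to a 3D point.
import numpy as np

def rotation_z(deg: float) -> np.ndarray:
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 0.0])
print(rotation_z(90.0) @ p)   # approx. [0, 1, 0]: a 90-degree rotation about z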
Kurai, R.; Hiraki, T.; Hiroi, Y.; Hirao, Y.; Perusquía-Hernández, M.; Uchiyama, H.; Kiyokawa, K.
MagicCraft: Natural Language-Driven Generation of Dynamic and Interactive 3D Objects for Commercial Metaverse Platforms Journal Article
In: IEEE Access, vol. 13, pp. 132459–132474, 2025, ISSN: 21693536 (ISSN), (Publisher: Institute of Electrical and Electronics Engineers Inc.).
Abstract | Links | BibTeX | Tags: 3D models, 3D object, 3D Object Generation, 3d-modeling, AI-Assisted Design, Artificial intelligence, Behavioral Research, Content creation, Generative AI, Immersive, Metaverse, Metaverses, Natural language processing systems, Natural languages, Object oriented programming, Three dimensional computer graphics, user experience, User interfaces
@article{kurai_magiccraft_2025,
title = {MagicCraft: Natural Language-Driven Generation of Dynamic and Interactive 3D Objects for Commercial Metaverse Platforms},
author = {R. Kurai and T. Hiraki and Y. Hiroi and Y. Hirao and M. Perusquía-Hernández and H. Uchiyama and K. Kiyokawa},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105010187256&doi=10.1109%2FACCESS.2025.3587232&partnerID=40&md5=9b7a8115c62a8f9da4956dbbbb53dc4e},
doi = {10.1109/ACCESS.2025.3587232},
issn = {21693536 (ISSN)},
year = {2025},
date = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {132459–132474},
abstract = {Metaverse platforms are rapidly evolving to provide immersive spaces for user interaction and content creation. However, the generation of dynamic and interactive 3D objects remains challenging due to the need for advanced 3D modeling and programming skills. To address this challenge, we present MagicCraft, a system that generates functional 3D objects from natural language prompts for metaverse platforms. MagicCraft uses generative AI models to manage the entire content creation pipeline: converting user text descriptions into images, transforming images into 3D models, predicting object behavior, and assigning necessary attributes and scripts. It also provides an interactive interface for users to refine generated objects by adjusting features such as orientation, scale, seating positions, and grip points. Implemented on Cluster, a commercial metaverse platform, MagicCraft was evaluated by 7 expert CG designers and 51 general users. Results show that MagicCraft significantly reduces the time and skill required to create 3D objects. Users with no prior experience in 3D modeling or programming successfully created complex, interactive objects and deployed them in the metaverse. Expert feedback highlighted the system's potential to improve content creation workflows and support rapid prototyping. By integrating AI-generated content into metaverse platforms, MagicCraft makes 3D content creation more accessible. © 2025 Elsevier B.V., All rights reserved.},
note = {Publisher: Institute of Electrical and Electronics Engineers Inc.},
keywords = {3D models, 3D object, 3D Object Generation, 3d-modeling, AI-Assisted Design, Artificial intelligence, Behavioral Research, Content creation, Generative AI, Immersive, Metaverse, Metaverses, Natural language processing systems, Natural languages, Object oriented programming, Three dimensional computer graphics, user experience, User interfaces},
pubstate = {published},
tppubtype = {article}
}
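A hedged outline of the text-to-image-to-3D-to-behavior pipeline the MagicCraft abstract above describes. Every stage function, field, and return value here is a hypothetical placeholder, not MagicCraft's API; it only shows how the stages chain together.

# Placeholder pipeline: prompt -> image -> mesh -> behavior script -> object.
from dataclasses import dataclass, field

@dataclass
class GeneratedObject:
    mesh_path: str
    behavior_script: str
    attributes: dict = field(default_factory=dict)

def text_to_image(prompt: str) -> str:
    return "generated.png"          # stand-in for a diffusion-model call

def image_to_mesh(image_path: str) -> str:
    return "generated.glb"          # stand-in for an image-to-3D model

def predict_behavior(mesh_path: str, prompt: str) -> str:
    return "onInteract(): sit()"    # stand-in for an LLM-predicted script

def create_object(prompt: str) -> GeneratedObject:
    image = text_to_image(prompt)
    mesh = image_to_mesh(image)
    script = predict_behavior(mesh, prompt)
    return GeneratedObject(mesh, script, {"scale": 1.0, "seat_points": []})

# obj = create_object("a wooden bench two avatars can sit on")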
Basouli, M.; Sheikhooni, S.
Application of Generative Artificial Intelligence in Simulating Virtual Tourism Experiences: Examining the Impact on Post-COVID Tourist Behavior Proceedings Article
In: pp. 593–596, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 28378296 (ISSN); 28378288 (ISSN), (Issue: 2025).
Abstract | Links | BibTeX | Tags: Advanced technology, Artificial intelligence, Behavioral Research, Commerce, Covid-19, Destination Marketing, Generative AI, Leisure industry, Literature reviews, Post-COVID, Tourism, Tourism industry, Tourist behavior, Tourist destinations, Virtual environments, Virtual Reality, Virtual Tourism, WebXR
@inproceedings{basouli_application_2025,
title = {Application of Generative Artificial Intelligence in Simulating Virtual Tourism Experiences: Examining the Impact on Post-COVID Tourist Behavior},
author = {M. Basouli and S. Sheikhooni},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105011597291&doi=10.1109%2FICWR65219.2025.11006234&partnerID=40&md5=55413d1f514a58726eed134828203915},
doi = {10.1109/ICWR65219.2025.11006234},
isbn = {28378296 (ISSN); 28378288 (ISSN)},
year = {2025},
date = {2025-01-01},
pages = {593–596},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This article examines the application of generative artificial intelligence in simulating virtual tourism experiences and its impact on tourist behavior in the post-COVID era. Utilizing advanced technologies such as Stable Diffusion, ChatGPT, and WebXR, a system has been designed to create interactive virtual experiences of tourist destinations. A literature review reveals that both virtual experiences and generative AI hold significant potential in the tourism industry. However, few studies have explored how these two technologies can be combined and their impact on tourist behavior. Additionally, considering that generative AI, as a tool for simulating tourism experiences, significantly influences tourists' perception of destinations and attractions, travel intention, travel anxiety, and willingness to pay, studying generative AI and virtual tourism seems essential. Therefore, this study aims to review previous research and explore the impact of AI-based virtual tourism experiences on tourist behavior in the post-COVID era. Moreover, the study investigates factors influencing the effectiveness of these experiences and the moderating role of safety and health concerns. Results also show that the perceived realism and interactivity of virtual experiences are key factors in their effectiveness. This research provides a theoretical framework for understanding the influence of generative AI on tourism behavior and offers important practical implications for destination marketers and policymakers in the post-COVID tourism industry. © 2025 Elsevier B.V., All rights reserved.},
note = {Issue: 2025},
keywords = {Advanced technology, Artificial intelligence, Behavioral Research, Commerce, Covid-19, Destination Marketing, Generative AI, Leisure industry, Literature reviews, Post-COVID, Tourism, Tourism industry, Tourist behavior, Tourist destinations, Virtual environments, Virtual Reality, Virtual Tourism, WebXR},
pubstate = {published},
tppubtype = {inproceedings}
}
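An illustrative sketch, assuming the Hugging Face diffusers library and one publicly available Stable Diffusion checkpoint, of the image-generation stage that the article above pairs with WebXR: a text prompt describing a destination is rendered by the model; serving the result into a WebXR scene is a separate step not shown here. A CUDA-capable GPU is assumed.

# Generating a destination image with Stable Diffusion via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a quiet beach at sunset, photorealistic, wide-angle panorama"
image = pipe(prompt).images[0]
image.save("destination_view.png")   # later loaded as a texture in a WebXR scene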