AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2025
Tortora, A.; Amaro, I.; Della Greca, A.; Barra, P.
Exploring the Role of Generative Artificial Intelligence in Virtual Reality: Opportunities and Future Perspectives Proceedings Article
In: Chen, J.Y.C.; Fragomeni, G. (Ed.): Lect. Notes Comput. Sci., pp. 125–142, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 0302-9743, ISBN: 978-3-031-93699-9.
@inproceedings{tortora_exploring_2025,
title = {Exploring the Role of Generative Artificial Intelligence in Virtual Reality: Opportunities and Future Perspectives},
author = {A. Tortora and I. Amaro and A. Della Greca and P. Barra},
editor = {Chen, J.Y.C. and Fragomeni, G.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007788684&doi=10.1007%2f978-3-031-93700-2_9&partnerID=40&md5=7b69183bbf8172f9595f939254fb6831},
doi = {10.1007/978-3-031-93700-2_9},
issn = {0302-9743},
isbn = {978-3-031-93699-9},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Comput. Sci.},
volume = {15788 LNCS},
pages = {125–142},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {In recent years, generative AI, such as language and image models, have started to revolutionize virtual reality (VR) by offering new opportunities for immersive and personalized interaction. This paper explores the potential of these Intelligent Augmentation technologies in the context of VR, analyzing how the generation of text and images in real time can enhance the user experience through dynamic and personalized environments and contents. The integration of generative AI in VR scenarios holds promise in multiple fields, including education, professional training, design, and healthcare. However, their implementation involves significant challenges, such as privacy management, data security, and ethical issues related to cognitive manipulation and representation of reality. Through an overview of current applications and future prospects, this paper highlights the crucial role of generative AI in enhancing VR, helping to outline a path for the ethical and sustainable development of these immersive technologies. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Ethical technology, Future perspectives, Generative AI, Image modeling, Immersive, immersive experience, Immersive Experiences, Information Management, Language Model, Personnel training, Professional training, Real- time, Sensitive data, Training design, Users' experiences, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
Ly, D. -N.; Do, H. -N.; Tran, M. -T.; Le, K. -D.
Evaluation of AI-Based Assistant Representations on User Interaction in Virtual Explorations Proceedings Article
In: Buntine, W.; Fjeld, M.; Tran, T.; Tran, M.-T.; Huynh Thi Thanh, B.; Miyoshi, T. (Ed.): Commun. Comput. Info. Sci., pp. 323–337, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 1865-0929, ISBN: 978-981-96-4287-8.
@inproceedings{ly_evaluation_2025,
title = {Evaluation of AI-Based Assistant Representations on User Interaction in Virtual Explorations},
author = {D. -N. Ly and H. -N. Do and M. -T. Tran and K. -D. Le},
editor = {Buntine, W. and Fjeld, M. and Tran, T. and Tran, M.-T. and Huynh Thi Thanh, B. and Miyoshi, T.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105004253350&doi=10.1007%2f978-981-96-4288-5_26&partnerID=40&md5=5f0a8c1e356cd3bdd4dda7f96f272154},
doi = {10.1007/978-981-96-4288-5_26},
issn = {1865-0929},
isbn = {978-981-96-4287-8},
year = {2025},
date = {2025-01-01},
booktitle = {Commun. Comput. Info. Sci.},
volume = {2352 CCIS},
pages = {323–337},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {Exploration activities, such as tourism, cultural heritage, and science, enhance knowledge and understanding. The rise of 360-degree videos allows users to explore cultural landmarks and destinations remotely. While multi-user VR environments encourage collaboration, single-user experiences often lack social interaction. Generative AI, particularly Large Language Models (LLMs), offer a way to improve single-user VR exploration through AI-driven virtual assistants, acting as tour guides or storytellers. However, it’s uncertain whether these assistants require a visual presence, and if so, what form it should take. To investigate this, we developed an AI-based assistant in three different forms: a voice-only avatar, a 3D human-sized avatar, and a mini-hologram avatar, and conducted a user study to evaluate their impact on user experience. The study, which involved 12 participants, found that the visual embodiments significantly reduce feelings of being alone, with distinct user preferences between the Human-sized avatar and the Mini hologram. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.},
keywords = {360-degree Video, AI-Based Assistant, Cultural heritages, Cultural science, Multiusers, Single users, Social interactions, Three dimensional computer graphics, User interaction, Users' experiences, Virtual environments, Virtual Exploration, Virtual Reality, Virtualization},
pubstate = {published},
tppubtype = {inproceedings}
}
Li, H.; Wang, Z.; Liang, W.; Wang, Y.
X’s Day: Personality-Driven Virtual Human Behavior Generation Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 3514–3524, 2025, ISSN: 1077-2626.
@article{li_xs_2025,
title = {X’s Day: Personality-Driven Virtual Human Behavior Generation},
author = {H. Li and Z. Wang and W. Liang and Y. Wang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003864932&doi=10.1109%2fTVCG.2025.3549574&partnerID=40&md5=a865bbd2b0fa964a4f0f4190955dc787},
doi = {10.1109/TVCG.2025.3549574},
issn = {1077-2626},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {31},
number = {5},
pages = {3514–3524},
abstract = {Developing convincing and realistic virtual human behavior is essential for enhancing user experiences in virtual reality (VR) and augmented reality (AR) settings. This paper introduces a novel task focused on generating long-term behaviors for virtual agents, guided by specific personality traits and contextual elements within 3D environments. We present a comprehensive framework capable of autonomously producing daily activities autoregressively. By modeling the intricate connections between personality characteristics and observable activities, we establish a hierarchical structure of Needs, Task, and Activity levels. Integrating a Behavior Planner and a World State module allows for the dynamic sampling of behaviors using large language models (LLMs), ensuring that generated activities remain relevant and responsive to environmental changes. Extensive experiments validate the effectiveness and adaptability of our approach across diverse scenarios. This research makes a significant contribution to the field by establishing a new paradigm for personalized and context-aware interactions with virtual humans, ultimately enhancing user engagement in immersive applications. Our project website is at: https://behavior.agent-x.cn/. © 2025 IEEE. All rights reserved,},
keywords = {adult, Augmented Reality, Behavior Generation, Chatbots, Computer graphics, computer interface, Contextual Scene, female, human, Human behaviors, Humans, Long-term behavior, male, Novel task, Personality, Personality traits, Personality-driven Behavior, physiology, Social behavior, User-Computer Interface, Users' experiences, Virtual agent, Virtual environments, Virtual humans, Virtual Reality, Young Adult},
pubstate = {published},
tppubtype = {article}
}
Gatti, E.; Giunchi, D.; Numan, N.; Steed, A.
Around the Virtual Campfire: Early UX Insights into AI-Generated Stories in VR Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 136–141, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 979-8-3315-2157-8.
@inproceedings{gatti_around_2025,
title = {Around the Virtual Campfire: Early UX Insights into AI-Generated Stories in VR},
author = {E. Gatti and D. Giunchi and N. Numan and A. Steed},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000263662&doi=10.1109%2fAIxVR63409.2025.00027&partnerID=40&md5=cd804d892d45554e936d0221508b3447},
doi = {10.1109/AIxVR63409.2025.00027},
isbn = {979-8-3315-2157-8},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {136–141},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Virtual Reality (VR) presents an immersive platform for storytelling, allowing narratives to unfold in highly engaging, interactive environments. Leveraging AI capabilities and image synthesis offers new possibilities for creating scalable, generative VR content. In this work, we use an LLM-driven VR storytelling platform to explore how AI-generated visuals and narrative elements impact the user experience in VR storytelling. Previously, we presented AIsop, a system to integrate LLM-generated text and images and TTS audio into a storytelling experience, where the narrative unfolds based on user input. In this paper, we present two user studies focusing on how AI-generated visuals influence narrative perception and the overall VR experience. Our findings highlight the positive impact of AI-generated pictorial content on the storytelling experience, highlighting areas for enhancement and further research in interactive narrative design. © 2025 IEEE.},
keywords = {Generative AI, Images synthesis, Immersive, Interactive Environments, Language Model, Large language model, Storytelling, User input, User study, Users' experiences, Virtual environments, VR},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhang, H.; Zha, S.; Cai, J.; Wohn, D. Y.; Carroll, J. M.
Generative AI in Virtual Reality Communities: A Preliminary Analysis of the VRChat Discord Community Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2025, ISBN: 979-8-4007-1395-8.
@inproceedings{zhang_generative_2025,
title = {Generative AI in Virtual Reality Communities: A Preliminary Analysis of the VRChat Discord Community},
author = {H. Zhang and S. Zha and J. Cai and D. Y. Wohn and J. M. Carroll},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005770564&doi=10.1145%2f3706599.3720120&partnerID=40&md5=9bdfc4e70b9b361d67791932f5a56413},
doi = {10.1145/3706599.3720120},
isbn = {979-8-4007-1395-8},
year = {2025},
date = {2025-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {As immersive social platforms like VRChat increasingly adopt generative AI (GenAI) technologies, it becomes critical to understand how community members perceive, negotiate, and utilize these tools. In this preliminary study, we conducted a qualitative analysis of VRChat-related Discord discussions, employing a deductive coding framework to identify key themes related to AI-assisted content creation, intellectual property disputes, and evolving community norms. Our findings offer preliminary insights into the complex interplay between the community’s enthusiasm for AI-driven creativity and deep-rooted ethical and legal concerns. Users weigh issues of fair use, data ethics, intellectual property, and the role of community governance in establishing trust. By highlighting the tensions and trade-offs as users embrace new creative opportunities while seeking transparency, fair attribution, and equitable policies, this research offers valuable insights for designers, platform administrators, and policymakers aiming to foster responsible, inclusive, and ethically sound AI integration in future immersive virtual environments. © 2025 Copyright held by the owner/author(s).},
keywords = {AI assistant, AI Technologies, Coding framework, Ethical technology, Human-ai collaboration, Immersive, On-line communities, online community, Preliminary analysis, Property, Qualitative analysis, user experience, Users' experiences},
pubstate = {published},
tppubtype = {inproceedings}
}
Xing, Y.; Liu, Q.; Wang, J.; Gómez-Zará, D.
sMoRe: Spatial Mapping and Object Rendering Environment Proceedings Article
In: Int Conf Intell User Interfaces Proc IUI, pp. 115–119, Association for Computing Machinery, 2025, ISBN: 979-8-4007-1409-2.
@inproceedings{xing_smore_2025,
title = {sMoRe: Spatial Mapping and Object Rendering Environment},
author = {Y. Xing and Q. Liu and J. Wang and D. Gómez-Zará},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001670668&doi=10.1145%2f3708557.3716337&partnerID=40&md5=8ef4c5c4ef2b3ee30d00e4b8d19d19b8},
doi = {10.1145/3708557.3716337},
isbn = {979-8-4007-1409-2},
year = {2025},
date = {2025-01-01},
booktitle = {Int Conf Intell User Interfaces Proc IUI},
pages = {115–119},
publisher = {Association for Computing Machinery},
abstract = {In mixed reality (MR) environments, understanding space and creating virtual objects is crucial to providing an intuitive user experience. This paper introduces sMoRe (Spatial Mapping and Object Rendering Environment), an MR application that combines Generative AI (GenAI) to assist users in creating, placing, and managing virtual objects within physical spaces. sMoRe allows users to use voice or typed text commands to create and place virtual objects using GenAI while specifying spatial constraints. The system employs Large Language Models (LLMs) to interpret users’ commands, analyze the current scene, and identify optimal locations. Additionally, sMoRe integrates a text-to-3D generative model to dynamically create 3D objects based on users’ descriptions. Our user study demonstrates the effectiveness of sMoRe in enhancing user comprehension, interaction, and organization of the MR environment. © 2025 Copyright held by the owner/author(s).},
keywords = {Generative adversarial networks, Generative AI, Language Model, Large language model, large language models, Mapping, Mixed reality, Mixed-reality environment, Object rendering, Rendering (computer graphics), Space Manipulation, Spatial mapping, Spatial objects, Users' experiences, Virtual environments, Virtual objects},
pubstate = {published},
tppubtype = {inproceedings}
}
2024
Liu, X. B.; Li, J. N.; Kim, D.; Chen, X.; Du, R.
Human I/O: Towards a Unified Approach to Detecting Situational Impairments Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2024, ISBN: 979-8-4007-0330-0.
@inproceedings{liu_human_2024,
title = {Human I/O: Towards a Unified Approach to Detecting Situational Impairments},
author = {X. B. Liu and J. N. Li and D. Kim and X. Chen and R. Du},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85194891045&doi=10.1145%2f3613904.3642065&partnerID=40&md5=01b3ece7c1bc2a758126fce88a15d14e},
doi = {10.1145/3613904.3642065},
isbn = {979-8-4007-0330-0},
year = {2024},
date = {2024-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {Situationally Induced Impairments and Disabilities (SIIDs) can significantly hinder user experience in contexts such as poor lighting, noise, and multi-tasking. While prior research has introduced algorithms and systems to address these impairments, they predominantly cater to specific tasks or environments and fail to accommodate the diverse and dynamic nature of SIIDs. We introduce Human I/O, a unified approach to detecting a wide range of SIIDs by gauging the availability of human input/output channels. Leveraging egocentric vision, multimodal sensing and reasoning with large language models, Human I/O achieves a 0.22 mean absolute error and a 82% accuracy in availability prediction across 60 in-the-wild egocentric video recordings in 32 different scenarios. Furthermore, while the core focus of our work is on the detection of SIIDs rather than the creation of adaptive user interfaces, we showcase the efficacy of our prototype via a user study with 10 participants. Findings suggest that Human I/O significantly reduces effort and improves user experience in the presence of SIIDs, paving the way for more adaptive and accessible interactive systems in the future. © 2024 Copyright held by the owner/author(s)},
keywords = {Augmented Reality, Computational Linguistics, Context awareness, Context- awareness, In contexts, Language Model, Large language model, large language models, Multi tasking, Multimodal sensing, Situational impairment, situational impairments, Specific tasks, Unified approach, User interfaces, Users' experiences, Video recording},
pubstate = {published},
tppubtype = {inproceedings}
}
Haramina, E.; Paladin, M.; Petričušić, Z.; Posarić, F.; Drobnjak, A.; Botički, I.
Learning Algorithms Concepts in a Virtual Reality Escape Room Proceedings Article
In: Babic, S.; Car, Z.; Cicin-Sain, M.; Cisic, D.; Ergovic, P.; Grbac, T.G.; Gradisnik, V.; Gros, S.; Jokic, A.; Jovic, A.; Jurekovic, D.; Katulic, T.; Koricic, M.; Mornar, V.; Petrovic, J.; Skala, K.; Skvorc, D.; Sruk, V.; Svaco, M.; Tijan, E.; Vrcek, N.; Vrdoljak, B. (Ed.): ICT Electron. Conv., MIPRO - Proc., pp. 2057–2062, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3503-8249-5.
@inproceedings{haramina_learning_2024,
title = {Learning Algorithms Concepts in a Virtual Reality Escape Room},
author = {E. Haramina and M. Paladin and Z. Petričušić and F. Posarić and A. Drobnjak and I. Botički},
editor = {Babic, S. and Car, Z. and Cicin-Sain, M. and Cisic, D. and Ergovic, P. and Grbac, T.G. and Gradisnik, V. and Gros, S. and Jokic, A. and Jovic, A. and Jurekovic, D. and Katulic, T. and Koricic, M. and Mornar, V. and Petrovic, J. and Skala, K. and Skvorc, D. and Sruk, V. and Svaco, M. and Tijan, E. and Vrcek, N. and Vrdoljak, B.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85198221737&doi=10.1109%2fMIPRO60963.2024.10569447&partnerID=40&md5=8a94d92d989d1f0feb84eba890945de8},
doi = {10.1109/MIPRO60963.2024.10569447},
isbn = {979-8-3503-8249-5},
year = {2024},
date = {2024-01-01},
booktitle = {ICT Electron. Conv., MIPRO - Proc.},
pages = {2057–2062},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Although the standard way to learn algorithms is by coding, learning through games is another way to obtain knowledge while having fun. Virtual reality is a computer-generated three-dimensional environment in which the player is fully immersed by having external stimuli mostly blocked out. In the game presented in this paper, players are enhancing their algorithms skills by playing an escape room game. The goal is to complete the room within the designated time by solving puzzles. The puzzles change for every playthrough with the use of generative artificial intelligence to provide every player with a unique experience. There are multiple types of puzzles, such as time complexity, sorting algorithms, searching algorithms, and code execution. The paper presents the results of a study indicating students' preference for learning through gaming as a method of acquiring algorithms knowledge. © 2024 IEEE.},
keywords = {Artificial intelligence, Computational complexity, Computer generated three dimensional environment, E-Learning, Education, Escape room, Extended reality, generative artificial intelligence, Learn+, Learning, Learning algorithms, Learning systems, Puzzle, puzzles, user experience, User study, User testing, Users' experiences, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
Asra, S. A.; Wickramarathne, J.
Artificial Intelligence (AI) in Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) Experiences: Enhancing Immersion and Interaction for User Experiences Proceedings Article
In: Luo, B.; Sahoo, S.K.; Lee, Y.H.; Lee, C.H.T.; Ong, M.; Alphones, A. (Ed.): IEEE Reg 10 Annu Int Conf Proc TENCON, pp. 1700–1705, Institute of Electrical and Electronics Engineers Inc., 2024, ISSN: 2159-3442, ISBN: 979-8-3503-5082-1.
@inproceedings{asra_artificial_2024,
title = {Artificial Intelligence (AI) in Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) Experiences: Enhancing Immersion and Interaction for User Experiences},
author = {S. A. Asra and J. Wickramarathne},
editor = {Luo, B. and Sahoo, S.K. and Lee, Y.H. and Lee, C.H.T. and Ong, M. and Alphones, A.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000443498&doi=10.1109%2fTENCON61640.2024.10902724&partnerID=40&md5=2ff92b5e2529ae7fe797cd8026e8065d},
doi = {10.1109/TENCON61640.2024.10902724},
issn = {2159-3442},
isbn = {979-8-3503-5082-1},
year = {2024},
date = {2024-01-01},
booktitle = {IEEE Reg 10 Annu Int Conf Proc TENCON},
pages = {1700–1705},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The utilisation of Artificial Intelligence (AI) generated material is one of the most fascinating advancements in the rapidly growing fields of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Two examples of how AI-generated material is revolutionising how we interact with AR, VR and MR are video games and training simulations. In this essay, we'll examine the intriguing potential of AI-generated content and how it's being used to the development of hybrid real-world/virtual experiences. Using this strategy, we acquired the information from primary and secondary sources. We surveyed AR, VR, and MR users to compile the data for the primary source. Then, utilising published papers as a secondary source, information was gathered. By elucidating the concept of context immersion, this research can lay the foundation for the advancement of information regarding immersive AR, VR, and MR contexts. We are able to offer recommendations for overcoming the weak parts and strengthening the good ones based on the questionnaire survey findings. © 2024 IEEE.},
keywords = {AI, AR, Emersion experience, Immersive augmented realities, Mixed reality, MR, Primary sources, Real-world, Secondary sources, Training simulation, Users' experiences, Video game simulation, Video training, Virtual environments, VR},
pubstate = {published},
tppubtype = {inproceedings}
}
Bayat, R.; De Maio, E.; Fiorenza, J.; Migliorini, M.; Lamberti, F.
Exploring Methodologies to Create a Unified VR User-Experience in the Field of Virtual Museum Experiences Proceedings Article
In: IEEE Gaming, Entertain., Media Conf., GEM, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3503-7453-7.
@inproceedings{bayat_exploring_2024,
title = {Exploring Methodologies to Create a Unified VR User-Experience in the Field of Virtual Museum Experiences},
author = {R. Bayat and E. De Maio and J. Fiorenza and M. Migliorini and F. Lamberti},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85199517817&doi=10.1109%2fGEM61861.2024.10585452&partnerID=40&md5=203c7b426a11144acc7a2fedbbac6a98},
doi = {10.1109/GEM61861.2024.10585452},
isbn = {979-8-3503-7453-7},
year = {2024},
date = {2024-01-01},
booktitle = {IEEE Gaming, Entertain., Media Conf., GEM},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The emergence of Virtual Reality (VR) and meta-verse have opened doors to new research opportunities and frontiers in User Experience (UX). Within the cultural heritage domain, one of the key concepts is that of the Virtual Museums (VMs), whose definition has been extended through time by many research works and applications. However, most of the studies performed so far focused on only one application, and studied its UX without taking into account the experience with other VR experiences possibly available in the VM. The purpose of this work is to give a contribution for an optimal design to create a unified UX across multiple VR experiences. More specifically, the research included the development of two applications, respectively a VM in a metaverse platform and a virtual learning workshop as an individual application. With this premise, the study will also consider two fundamental elements for an effective UX design: a Virtual Environment (VE) and an Intelligent Virtual Avatar (IVA). In particular, the latest was developed following current trends about generative AI, integrating an IVA powered by a Large Language Model (LLM). © 2024 IEEE.},
keywords = {Cultural heritages, Meta-museum, Meta-museums, Metaverse, Metaverses, Research frontiers, Research opportunities, user experience, User experience design, User interfaces, User-Experience Design, Users' experiences, Virtual avatar, Virtual machine, Virtual museum, Virtual Reality, Virtual reality experiences},
pubstate = {published},
tppubtype = {inproceedings}
}
Su, X.; Koh, E.; Xiao, C.
SonifyAR: Context-Aware Sound Effect Generation in Augmented Reality Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2024, ISBN: 979-8-4007-0331-7.
@inproceedings{su_sonifyar_2024,
title = {SonifyAR: Context-Aware Sound Effect Generation in Augmented Reality},
author = {X. Su and E. Koh and C. Xiao},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85194146678&doi=10.1145%2f3613905.3650927&partnerID=40&md5=fa2154e1ffdd5339696ccb39584dee16},
doi = {10.1145/3613905.3650927},
isbn = {979-8-4007-0331-7},
year = {2024},
date = {2024-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {Sound plays crucial roles in enhancing user experience and immersiveness in Augmented Reality (AR). However, current AR authoring platforms lack support for creating sound effects that harmonize with both the virtual and the real-world contexts. In this work, we present SonifyAR, a novel system for generating context-aware sound effects in AR experiences. SonifyAR implements a Programming by Demonstration (PbD) AR authoring pipeline. We utilize computer vision models and a large language model (LLM) to generate text descriptions that incorporate context information of user, virtual object and real world environment. This context information is then used to acquire sound effects with recommendation, generation, and retrieval methods. The acquired sound effects can be tested and assigned to AR events. Our user interface also provides the flexibility to allow users to iteratively explore and fine-tune the sound effects. We conducted a preliminary user study to demonstrate the effectiveness and usability of our system. © 2024 Association for Computing Machinery. All rights reserved.},
keywords = {'current, Augmented Reality, Augmented reality authoring, Authoring Tool, Context information, Context-Aware, Immersiveness, Iterative methods, Mixed reality, Real-world, Sound, Sound effects, User interfaces, Users' experiences},
pubstate = {published},
tppubtype = {inproceedings}
}
Min, Y.; Jeong, J. -W.
Public Speaking Q&A Practice with LLM-Generated Personas in Virtual Reality Proceedings Article
In: Eck, U.; Sra, M.; Stefanucci, J.; Sugimoto, M.; Tatzgern, M.; Williams, I. (Ed.): Proc. - IEEE Int. Symp. Mixed Augment. Real. Adjunct, ISMAR-Adjunct, pp. 493–496, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3315-0691-9.
@inproceedings{min_public_2024,
title = {Public Speaking Q&A Practice with LLM-Generated Personas in Virtual Reality},
author = {Y. Min and J. -W. Jeong},
editor = {Eck, U. and Sra, M. and Stefanucci, J. and Sugimoto, M. and Tatzgern, M. and Williams, I.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85214393734&doi=10.1109%2fISMAR-Adjunct64951.2024.00143&partnerID=40&md5=992d9599bde26f9d57d549639869d124},
doi = {10.1109/ISMAR-Adjunct64951.2024.00143},
isbn = {979-833150691-9 (ISBN)},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Symp. Mixed Augment. Real. Adjunct, ISMAR-Adjunct},
pages = {493–496},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This paper introduces a novel VR-based Q&A practice system that harnesses the power of Large Language Models (LLMs). We support Q&A practice for upcoming public speaking by providing an immersive VR training environment populated with LLM-generated audiences, each capable of posing diverse and realistic questions based on different personas. We conducted a pilot user study involving 20 participants who engaged in VR-based Q&A practice sessions. The sessions featured a variety of questions regarding presentation material provided by the participants, all of which were generated by LLM-based personas. Through post-surveys and interviews, we evaluated the effectiveness of the proposed method. The participants valued the system for engagement and focus while also identifying several areas for improvement. Our study demonstrated the potential of integrating VR and LLMs to create a powerful, immersive tool for Q&A practice. © 2024 IEEE.},
keywords = {Digital elevation model, Economic and social effects, Language Model, Large language model-based persona generation, LLM-based Persona Generation, Model-based OPC, Personnel training, Power, Practice systems, Presentation Anxiety, Public speaking, Q&A practice, user experience, Users' experiences, Virtual environments, Virtual Reality, VR training},
pubstate = {published},
tppubtype = {inproceedings}
}
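The persona-conditioned question generation this abstract describes could be driven by prompts assembled per persona, sketched below. The persona fields and template wording are hypothetical illustrations, not the authors' prompts.

```python
# Hypothetical persona-conditioned prompt builder; the persona fields and
# template wording are illustrative, not taken from the paper.
PERSONAS = [
    {"role": "skeptical domain expert", "style": "probing, detail-oriented"},
    {"role": "curious newcomer", "style": "broad, clarification-seeking"},
]

def question_prompt(persona, talk_summary):
    """Build the prompt an LLM would receive to ask one audience
    question in character for the given persona."""
    return (
        f"You are a {persona['role']} attending a talk.\n"
        f"Talk summary: {talk_summary}\n"
        f"Ask exactly one {persona['style']} question about the talk."
    )

for p in PERSONAS:
    print(question_prompt(p, "VR-based practice for public speaking Q&A."))
```

Varying the persona list is what yields the "diverse and realistic questions" the study evaluates; each generated question would then be voiced by a virtual audience member in the VR environment.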
Wong, A.; Zhao, Y.; Baghaei, N.
Effects of Customizable Intelligent VR Shopping Assistant on Shopping for Stress Relief Proceedings Article
In: U., Eck; M., Sra; J., Stefanucci; M., Sugimoto; M., Tatzgern; I., Williams (Ed.): Proc. - IEEE Int. Symp. Mixed Augment. Real. Adjunct, ISMAR-Adjunct, pp. 304–308, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-833150691-9 (ISBN).
Abstract | Links | BibTeX | Tags: Customisation, Customizable, generative artificial intelligence, Head-mounted-displays, Helmet mounted displays, Immersive, Mental health, mHealth, Realistic rendering, stress, Stress relief, Users' experiences, Virtual environments, Virtual Reality, Virtual shopping, Virtual shopping assistant
@inproceedings{wong_effects_2024,
title = {Effects of Customizable Intelligent VR Shopping Assistant on Shopping for Stress Relief},
author = {A. Wong and Y. Zhao and N. Baghaei},
editor = {Eck U. and Sra M. and Stefanucci J. and Sugimoto M. and Tatzgern M. and Williams I.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85214427097&doi=10.1109%2fISMAR-Adjunct64951.2024.00069&partnerID=40&md5=1530bc0a2139fb33b1a2917c3eb31296},
doi = {10.1109/ISMAR-Adjunct64951.2024.00069},
isbn = {979-833150691-9 (ISBN)},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Symp. Mixed Augment. Real. Adjunct, ISMAR-Adjunct},
pages = {304–308},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Shopping has long been a method of distraction and stress relief. Virtual Reality (VR) effectively simulates immersive experiences, including shopping, through head-mounted displays (HMDs), which create an environment through realistic renderings and sounds. Current studies in VR have shown that assistants can support users by reducing stress, indicating their ability to improve mental health within VR. Customization and personalization have also been used to enhance the user experience, with users preferring the tailored experience and reporting a greater sense of immersion. There is a gap in knowledge on the effects of customization on a VR assistant's ability to reduce stress within the VR retailing space. This research aims to identify relationships between customization and shopping assistants within VR to better understand their effects on the user experience. Understanding this will help the development of VR assistants for mental health and consumer-ready VR shopping experiences. © 2024 IEEE.},
keywords = {Customisation, Customizable, generative artificial intelligence, Head-mounted-displays, Helmet mounted displays, Immersive, Mental health, mHealth, Realistic rendering, stress, Stress relief, Users' experiences, Virtual environments, Virtual Reality, Virtual shopping, Virtual shopping assistant},
pubstate = {published},
tppubtype = {inproceedings}
}
Tang, Y.; Situ, J.; Huang, Y.
Beyond User Experience: Technical and Contextual Metrics for Large Language Models in Extended Reality Proceedings Article
In: UbiComp Companion - Companion ACM Int. Jt. Conf. Pervasive Ubiquitous Comput., pp. 640–643, Association for Computing Machinery, Inc, 2024, ISBN: 979-840071058-2 (ISBN).
Abstract | Links | BibTeX | Tags: Augmented Reality, Computer simulation languages, Evaluation Metrics, Extended reality, Language Model, Large language model, large language models, Mixed reality, Modeling performance, Natural language processing systems, Physical world, Spatial computing, spatial data, user experience, Users' experiences, Virtual environments, Virtual Reality
@inproceedings{tang_beyond_2024,
title = {Beyond User Experience: Technical and Contextual Metrics for Large Language Models in Extended Reality},
author = {Y. Tang and J. Situ and Y. Huang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85206203437&doi=10.1145%2f3675094.3678995&partnerID=40&md5=3fb337872b483a163bfbea038f1baffe},
doi = {10.1145/3675094.3678995},
isbn = {979-840071058-2 (ISBN)},
year = {2024},
date = {2024-01-01},
booktitle = {UbiComp Companion - Companion ACM Int. Jt. Conf. Pervasive Ubiquitous Comput.},
pages = {640–643},
publisher = {Association for Computing Machinery, Inc},
abstract = {Spatial Computing involves interacting with the physical world through spatial data manipulation, closely linked with Extended Reality (XR), which includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Large Language Models (LLMs) significantly enhance XR applications by improving user interactions through natural language understanding and content generation. Typical evaluations of these applications focus on user experience (UX) metrics, such as task performance, user satisfaction, and psychological assessments, but often neglect the technical performance of the LLMs themselves. This paper identifies significant gaps in current evaluation practices for LLMs within XR environments, attributing them to the novelty of the field, the complexity of spatial contexts, and the multimodal nature of interactions in XR. To address these gaps, the paper proposes specific metrics tailored to evaluate LLM performance in XR contexts, including spatial contextual awareness, coherence, proactivity, multimodal integration, hallucination, and question-answering accuracy. These proposed metrics aim to complement existing UX evaluations, providing a comprehensive assessment framework that captures both the technical and user-centric aspects of LLM performance in XR applications. The conclusion underscores the necessity for a dual-focused approach that combines technical and UX metrics to ensure effective and user-friendly LLM-integrated XR systems. © 2024 Copyright held by the owner/author(s).},
keywords = {Augmented Reality, Computer simulation languages, Evaluation Metrics, Extended reality, Language Model, Large language model, large language models, Mixed reality, Modeling performance, Natural language processing systems, Physical world, Spatial computing, spatial data, user experience, Users' experiences, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
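The dual-focused evaluation this abstract argues for (technical LLM metrics alongside conventional UX metrics) could be organized as a simple scorecard. Only the metric names come from the abstract; the 0-1 scale, the equal weighting, and treating hallucination rate as lower-is-better are assumptions made for illustration.

```python
# Illustrative scorecard for a dual-focused XR evaluation; metric names
# follow the abstract, the scale and weighting are assumptions.
TECHNICAL_METRICS = (
    "spatial_contextual_awareness", "coherence", "proactivity",
    "multimodal_integration", "hallucination_rate", "qa_accuracy",
)
UX_METRICS = ("task_performance", "user_satisfaction")

def dual_score(scores):
    """Average each metric family separately so strong UX results cannot
    mask weak technical performance (or vice versa); hallucination_rate
    is inverted because lower is better."""
    tech = [1.0 - scores[m] if m == "hallucination_rate" else scores[m]
            for m in TECHNICAL_METRICS]
    ux = [scores[m] for m in UX_METRICS]
    return {"technical": sum(tech) / len(tech), "ux": sum(ux) / len(ux)}

scores = {m: 0.8 for m in TECHNICAL_METRICS + UX_METRICS}
scores["hallucination_rate"] = 0.1  # fraction of hallucinated responses
print(dual_score(scores))
```

Reporting the two aggregates side by side, rather than a single blended number, reflects the paper's point that UX evaluations alone neglect the LLM's technical performance in XR contexts.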
Constantinides, N.; Constantinides, A.; Koukopoulos, D.; Fidas, C.; Belk, M.
CulturAI: Exploring Mixed Reality Art Exhibitions with Large Language Models for Personalized Immersive Experiences Proceedings Article
In: UMAP - Adjun. Proc. ACM Conf. User Model., Adapt. Personal., pp. 102–105, Association for Computing Machinery, Inc, 2024, ISBN: 979-840070466-6 (ISBN).
Abstract | Links | BibTeX | Tags: Computational Linguistics, Immersive, Language Model, Large language model, large language models, Mixed reality, Mixed reality art, Mixed reality technologies, Model-based OPC, User Experience Evaluation, User experience evaluations, User interfaces, User study, Users' experiences
@inproceedings{constantinides_culturai_2024,
title = {CulturAI: Exploring Mixed Reality Art Exhibitions with Large Language Models for Personalized Immersive Experiences},
author = {N. Constantinides and A. Constantinides and D. Koukopoulos and C. Fidas and M. Belk},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85198910809&doi=10.1145%2f3631700.3664874&partnerID=40&md5=952d82629a3fcdc6e2a960dd532b2c09},
doi = {10.1145/3631700.3664874},
isbn = {979-840070466-6 (ISBN)},
year = {2024},
date = {2024-01-01},
booktitle = {UMAP - Adjun. Proc. ACM Conf. User Model., Adapt. Personal.},
pages = {102–105},
publisher = {Association for Computing Machinery, Inc},
abstract = {Mixed Reality (MR) technologies have transformed the way in which we interact and engage with digital content, offering immersive experiences that blend the physical and virtual worlds. Over the past years, there has been increasing interest in employing Artificial Intelligence (AI) technologies to improve user experience and trustworthiness in cultural contexts. However, the integration of Large Language Models (LLMs) into MR applications within the Cultural Heritage (CH) domain is relatively underexplored. In this work, we present an investigation into the integration of LLMs within MR environments, focusing on the context of virtual art exhibitions. We implemented a HoloLens MR application, which enables users to explore artworks while interacting with an LLM through voice. To evaluate the user experience and perceived trustworthiness of individuals engaging with an LLM-based virtual art guide, we adopted a between-subjects study design, in which participants were randomly assigned to either the LLM-based version or a control group using conventional interaction methods. The LLM-based version allows users to pose inquiries about the artwork displayed, ranging from details about the creator to information about the artwork's origin and historical significance. This paper presents the technical aspects of integrating LLMs within MR applications and evaluates the user experience and perceived trustworthiness of this approach in enhancing the exploration of virtual art exhibitions. Results of an initial evaluation provide evidence for the positive impact of integrating LLMs in MR applications. Findings of this work contribute to the advancement of MR technologies for the development of future interactive personalized art experiences. © 2024 Owner/Author.},
keywords = {Computational Linguistics, Immersive, Language Model, Large language model, large language models, Mixed reality, Mixed reality art, Mixed reality technologies, Model-based OPC, User Experience Evaluation, User experience evaluations, User interfaces, User study, Users' experiences},
pubstate = {published},
tppubtype = {inproceedings}
}