AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
2025
Koizumi, M.; Ohsuga, M.; Corchado, J. M.
Development and Assessment of a System to Help Students Improve Self-compassion Proceedings Article
In: Chinthaginjala, R.; Sitek, P.; Min-Allah, N.; Matsui, K.; Ossowski, S.; Rodríguez, S. (Eds.): Lecture Notes in Networks and Systems, vol. 1259, pp. 43–52, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 2367-3370, ISBN: 978-3-031-82072-4.
@inproceedings{koizumi_development_2025,
title = {Development and Assessment of a System to Help Students Improve Self-compassion},
author = {M. Koizumi and M. Ohsuga and J. M. Corchado},
editor = {R. Chinthaginjala and P. Sitek and N. Min-Allah and K. Matsui and S. Ossowski and S. Rodríguez},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85218979175&doi=10.1007%2f978-3-031-82073-1_5&partnerID=40&md5=b136d4a114ce5acfa89f907ccecc145f},
doi = {10.1007/978-3-031-82073-1_5},
issn = {2367-3370},
isbn = {978-3-031-82072-4},
year = {2025},
date = {2025-01-01},
booktitle = {Lecture Notes in Networks and Systems},
volume = {1259},
pages = {43–52},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {Mental health issues are becoming more prevalent among university students. The mindful self-compassion (MSC) training program, which was introduced to address this issue, has shown some efficacy. However, many people, particularly Japanese people, have difficulty recalling visual imagery or feel uncomfortable with, or resistant to, treating themselves with compassion. This study proposes and develops a system that uses virtual space and avatars to help individuals improve their self-compassion. In the proposed system, the user first selects an avatar of a person with whom to talk (hereafter referred to as the “partner”) and then describes a problem to the chosen avatar. Next, the user switches viewpoints, listens to the problem as the partner’s avatar, and responds with compassion. Finally, the user returns to his/her own avatar and listens to the compassionate response delivered by the partner’s avatar. We first conducted surveys to identify the important system components and then developed prototypes. In light of the experimental results, we improved the prototype by introducing generative AI. The first prototype replayed the user’s spoken words verbatim, whereas the improved system uses generative AI to reorganize the speech before presenting it. In addition, we added a function that generates and appends compassionate advice. The proposed system is expected to contribute to the improvement of students’ self-compassion. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.},
keywords = {Avatar, Generative adversarial networks, Generative AI, Health issues, Mental health, Self-compassion, Students, Training program, University students, Virtual avatar, Virtual environments, Virtual Reality, Virtual Space, Virtual spaces, Visual imagery},
pubstate = {published},
tppubtype = {inproceedings}
}
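As a rough illustration of the three-phase, perspective-switching flow described in the abstract, the session logic can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; the generate helper and both prompts are assumptions.

# Illustrative sketch of the perspective-switching dialogue described in the
# abstract. generate() stands in for any text-generation API; all names and
# prompts here are hypothetical, not the authors' code.

from dataclasses import dataclass

@dataclass
class DialogueTurn:
    speaker: str
    text: str

def generate(prompt: str) -> str:
    """Placeholder for a generative-AI call (hypothetical)."""
    raise NotImplementedError

def self_compassion_session(problem: str, partner: str) -> list[DialogueTurn]:
    turns = [DialogueTurn("user", problem)]
    # The improved prototype reorganizes the user's speech before replaying it.
    organized = generate(f"Rewrite this account clearly and neutrally: {problem}")
    turns.append(DialogueTurn(partner, organized))
    # The user responds compassionately as the partner avatar; the system can
    # also generate and append compassionate advice.
    advice = generate(f"Offer a short, compassionate response to: {organized}")
    turns.append(DialogueTurn(partner, advice))
    # Finally the user, back in their own avatar, hears the response replayed.
    return turns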
Casas, L.; Hannah, S.; Mitchell, K.
HoloJig: Interactive Spoken Prompt Specified Generative AI Environments Journal Article
In: IEEE Computer Graphics and Applications, vol. 45, no. 2, pp. 69–77, 2025, ISSN: 0272-1716.
@article{casas_holojig_2025,
title = {HoloJig: Interactive Spoken Prompt Specified Generative AI Environments},
author = {L. Casas and S. Hannah and K. Mitchell},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001182100&doi=10.1109%2fMCG.2025.3553780&partnerID=40&md5=ec5dc44023314b6f9221169357d81dcd},
doi = {10.1109/MCG.2025.3553780},
issn = {0272-1716},
year = {2025},
date = {2025-01-01},
journal = {IEEE Computer Graphics and Applications},
volume = {45},
number = {2},
pages = {69–77},
abstract = {HoloJig offers an interactive speech-to-virtual-reality (VR) experience that generates diverse environments in real time based on live spoken descriptions. Unlike traditional VR systems that rely on prebuilt assets, HoloJig dynamically creates personalized and immersive virtual spaces with depth-based parallax 3-D rendering, allowing users to define the characteristics of their immersive environment through verbal prompts. This generative approach opens up new possibilities for interactive experiences, including simulations, training, collaborative workspaces, and entertainment. In addition to speech-to-VR environment generation, a key innovation of HoloJig is its progressive visual transition mechanism, which smoothly dissolves between previously generated and newly requested environments, mitigating the delay caused by neural computations. This feature ensures a seamless and continuous user experience, even as new scenes are being rendered on remote servers. © 1981-2012 IEEE.},
keywords = {3-D rendering, Article, Collaborative workspace, customer experience, Economic and social effects, generative artificial intelligence, human, Immersive, Immersive environment, parallax, Real- time, simulation, Simulation training, speech, Time based, Virtual environments, Virtual Reality, Virtual reality experiences, Virtual spaces, VR systems},
pubstate = {published},
tppubtype = {article}
}
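The progressive visual transition described above is essentially a time-based crossfade that masks remote rendering latency. Below is a minimal sketch assuming two equally sized RGB frames and a linear blending schedule; the names and schedule are illustrative, not taken from the paper.

# Time-based dissolve between a previously generated environment frame and a
# newly requested one, hiding neural-rendering delay. The linear schedule and
# all names are assumptions.

import numpy as np

def dissolve(old_frame: np.ndarray, new_frame: np.ndarray,
             t_elapsed: float, t_total: float) -> np.ndarray:
    """Blend linearly from old_frame to new_frame over t_total seconds."""
    alpha = float(np.clip(t_elapsed / t_total, 0.0, 1.0))
    blended = (1.0 - alpha) * old_frame.astype(np.float32) \
              + alpha * new_frame.astype(np.float32)
    return blended.astype(old_frame.dtype)

# Called once per rendered frame while the remote server streams the new
# scene; alpha reaches 1.0 when the transition completes.
old = np.zeros((720, 1280, 3), dtype=np.uint8)
new = np.full((720, 1280, 3), 255, dtype=np.uint8)
frame = dissolve(old, new, t_elapsed=0.5, t_total=2.0)  # 25% of the way in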
Sajiukumar, A.; Ranjan, A.; Parvathi, P. K.; Satheesh, A.; Udayan, J. Divya; Subramaniam, U.
Generative AI-Enabled Virtual Twin for Meeting Assistants Proceedings Article
In: Saba, T.; Rehman, A. (Eds.): Proceedings of the International Women in Data Science Conference at Prince Sultan University (WiDS-PSU), pp. 60–65, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 979-8-3315-2092-2.
@inproceedings{sajiukumar_generative_2025,
title = {Generative AI-Enabled Virtual Twin for Meeting Assistants},
author = {A. Sajiukumar and A. Ranjan and P. K. Parvathi and A. Satheesh and J. Divya Udayan and U. Subramaniam},
editor = {T. Saba and A. Rehman},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105007691247&doi=10.1109%2fWiDS-PSU64963.2025.00025&partnerID=40&md5=f0bfb74a8f854c427054c73582909185},
doi = {10.1109/WiDS-PSU64963.2025.00025},
isbn = {979-8-3315-2092-2},
year = {2025},
date = {2025-01-01},
booktitle = {Proceedings of the International Women in Data Science Conference at Prince Sultan University (WiDS-PSU)},
pages = {60–65},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The growing dependence on virtual spaces for communication and collaboration has transformed interactions in numerous industries, ranging from professional meetings to education, entertainment, and healthcare. Despite the advancement of AI technologies such as three-dimensional modeling, voice cloning, and conversational AI, the convergence of these technologies in a single platform is still challenging. This paper introduces a unified framework that brings together state-of-the-art 3D avatar generation, real-time voice cloning, and conversational AI to enhance virtual interactions. The system utilizes Triplane neural representations and neural radiance fields (NeRF) for high-fidelity 3D avatar generation, speaker encoders coupled with Tacotron 2 and WaveRNN for natural voice cloning, and a context-aware chat algorithm for adaptive conversations. By overcoming the challenges of customization, integration, and real-time performance, the proposed framework addresses the increasing need for realistic virtual representations, setting new benchmarks for AI-augmented interaction in virtual conferences, online representation, education, and healthcare. © 2025 IEEE.},
keywords = {3D avatar generation, 3D Avatars, 3D reconstruction, AI-augmented interaction, Augmented Reality, Communication and collaborations, Conversational AI, Neural radiation field, neural radiation fields (NeRF), Radiation field, Real time performance, real-time performance, Three dimensional computer graphics, Virtual spaces, Voice cloning},
pubstate = {published},
tppubtype = {inproceedings}
}
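The voice-cloning chain named in the abstract (a speaker encoder feeding Tacotron 2, vocoded by WaveRNN) composes as three stages. The sketch below shows only that composition; each function is a hypothetical stub, not the authors' models.

# Three-stage voice-cloning chain as named in the abstract:
# speaker encoder -> Tacotron 2 (mel spectrogram) -> WaveRNN (waveform).
# All functions are hypothetical stubs.

import numpy as np

def speaker_encoder(reference_audio: np.ndarray) -> np.ndarray:
    """Return a fixed-size speaker embedding (stub)."""
    raise NotImplementedError

def tacotron2(text: str, speaker_embedding: np.ndarray) -> np.ndarray:
    """Synthesize a mel spectrogram conditioned on the speaker (stub)."""
    raise NotImplementedError

def wavernn(mel: np.ndarray) -> np.ndarray:
    """Vocode the mel spectrogram into an audio waveform (stub)."""
    raise NotImplementedError

def clone_voice(reference_audio: np.ndarray, text: str) -> np.ndarray:
    embedding = speaker_encoder(reference_audio)  # capture voice identity
    mel = tacotron2(text, embedding)              # say new text in that voice
    return wavernn(mel)                           # render audible speech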
2024
Geetha, S.; Aditya, G.; Reddy, M. Chetan; Nischith, G.
Human Interaction in Virtual and Mixed Reality Through Hand Tracking Proceedings Article
In: Proceedings of the IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3503-8592-2.
@inproceedings{geetha_human_2024,
title = {Human Interaction in Virtual and Mixed Reality Through Hand Tracking},
author = {S. Geetha and G. Aditya and M. Chetan Reddy and G. Nischith},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205768661&doi=10.1109%2fCONECCT62155.2024.10677239&partnerID=40&md5=173e590ca9a1e30b760d05af562f311a},
doi = {10.1109/CONECCT62155.2024.10677239},
isbn = {979-8-3503-8592-2},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This paper explores the potential and possibilities of hand tracking in virtual reality (VR) and mixed reality (MR), focusing on its role in human interaction dynamics. An application was designed in Unity leveraging the XR Interaction Toolkit, within which items across three domains (daily life, education, and recreation) were crafted to demonstrate the versatility of hand tracking, along with hand gesture-based shortcuts for interaction. Integration of elements in MR ensures that users can seamlessly enjoy virtual experiences while remaining connected to their physical surroundings. Precise hand tracking enables effortless interaction with the virtual space, enhancing presence and control with a user-friendly interface. Additionally, the paper explores the effectiveness of integrating hand tracking into education and training scenarios. A computer assembly simulation was created to demonstrate this, featuring component inspection and zoom capabilities, along with a large language model (LLM) integrated with hand gestures to provide interaction capabilities. © 2024 IEEE.},
keywords = {Computer interaction, Computer simulation languages, Daily lives, Digital elevation model, Hand gesture, hand tracking, Hand-tracking, human-computer interaction, Humaninteraction, Interaction dynamics, Mixed reality, Unity, User friendly interface, User interfaces, Virtual environments, Virtual Reality, Virtual spaces},
pubstate = {published},
tppubtype = {inproceedings}
}
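The hand gesture-based shortcuts described above reduce to mapping a recognized gesture to an action. The authors' system is built in Unity with the XR Interaction Toolkit; the Python sketch below shows only the dispatch pattern, with every gesture and action name invented for illustration.

# Gesture-to-shortcut dispatch pattern behind hand gesture-based shortcuts.
# Gesture names and actions are hypothetical, not the paper's mappings.

from typing import Callable

def open_menu() -> None:
    print("menu opened")

def zoom_component() -> None:
    print("component zoomed")   # e.g. inspecting a part in the assembly demo

def query_llm() -> None:
    print("LLM query started")  # e.g. the LLM-backed helper

SHORTCUTS: dict[str, Callable[[], None]] = {
    "pinch": zoom_component,
    "palm_open": open_menu,
    "thumbs_up": query_llm,
}

def on_gesture(gesture: str) -> None:
    action = SHORTCUTS.get(gesture)
    if action is not None:      # ignore unrecognized gestures
        action()

on_gesture("pinch")             # prints "component zoomed"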
Yang, S.; Tsui, Y. H.; Wang, X.; Alhilal, A.; Mogavi, R. H.; Wang, X.; Hui, P.
From Prompt to Metaverse: User Perceptions of Personalized Spaces Crafted by Generative AI Proceedings Article
In: Bernstein, M.; Bruckman, A.; Gadiraju, U.; Halfaker, A.; Ma, X.; Pinatti, F.; Redi, M.; Ribes, D.; Savage, S.; Zhang, A. (Eds.): Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW), pp. 497–504, Association for Computing Machinery, 2024, ISBN: 979-8-4007-1114-5.
@inproceedings{yang_prompt_2024,
title = {From Prompt to Metaverse: User Perceptions of Personalized Spaces Crafted by Generative AI},
author = {S. Yang and Y. H. Tsui and X. Wang and A. Alhilal and R. H. Mogavi and X. Wang and P. Hui},
editor = {M. Bernstein and A. Bruckman and U. Gadiraju and A. Halfaker and X. Ma and F. Pinatti and M. Redi and D. Ribes and S. Savage and A. Zhang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85214583858&doi=10.1145%2f3678884.3681897&partnerID=40&md5=c914b32a0cee1520712062e6ec35eb3a},
doi = {10.1145/3678884.3681897},
isbn = {979-8-4007-1114-5},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW)},
pages = {497–504},
publisher = {Association for Computing Machinery},
abstract = {Generative artificial intelligence (AI) has revolutionized content creation. In parallel, the Metaverse has emerged to transcend the constraints of our physical reality. While generative AI has a multitude of exciting applications in writing, coding, and graphic design, its use to personalize our virtual spaces has not yet been explored. In this paper, we investigate the application of Artificial Intelligence Generated Content (AIGC) to personalize virtual spaces and enhance the metaverse experience. To this end, we present a pipeline that enables users to customize their virtual spaces. Moreover, we explore the hardware resources and latency required for personalized spaces, as well as user acceptance of the AI-generated spaces. Comprehensive user studies follow extensive system experiments. Our research evaluates users' perceptions of two kinds of generated spaces: panoramic images and 3D virtual spaces. According to our findings, users show great interest in 3D personalized spaces, and the practicality and immersion of 3D space generation tools surpass those of panoramic space generation tools. © 2024 ACM.},
keywords = {AI-generated content, Content creation, Generation tools, Generative adversarial networks, generative artificial intelligence, HCI, Metaverse, Metaverses, Personalization, Personalizations, Space generation, User perceptions, Virtual environments, Virtual Reality, Virtual spaces},
pubstate = {published},
tppubtype = {inproceedings}
}
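The two generation routes the study compares (panoramic images versus 3D spaces) can be framed as a small harness that times each route, in the spirit of the latency questions the authors raise. Both generators below are hypothetical stubs, not the paper's pipeline.

# Prompt-to-space harness: route a user prompt to a panorama or a 3D space
# generator and record latency. Generator functions are hypothetical stubs.

import time

def generate_panorama(prompt: str):
    """Stand-in for a panoramic-image generator (hypothetical)."""
    raise NotImplementedError

def generate_3d_space(prompt: str):
    """Stand-in for a 3D virtual-space generator (hypothetical)."""
    raise NotImplementedError

def personalize(prompt: str, mode: str = "3d"):
    generator = generate_3d_space if mode == "3d" else generate_panorama
    start = time.perf_counter()
    space = generator(prompt)
    latency = time.perf_counter() - start   # per-route hardware/latency cost
    return space, latency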
Numan, N.; Rajaram, S.; Kumaravel, B. T.; Marquardt, N.; Wilson, A. D.
SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending Proceedings Article
In: Proceedings of the Annual ACM Symposium on User Interface Software and Technology (UIST), Association for Computing Machinery, Inc, 2024, ISBN: 979-8-4007-0628-8.
@inproceedings{numan_spaceblender_2024,
title = {SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending},
author = {N. Numan and S. Rajaram and B. T. Kumaravel and N. Marquardt and A. D. Wilson},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85209252034&doi=10.1145%2f3654777.3676361&partnerID=40&md5=8744057832f9098eabfd16c8b2b5fe62},
doi = {10.1145/3654777.3676361},
isbn = {979-8-4007-0628-8},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the Annual ACM Symposium on User Interface Software and Technology (UIST)},
publisher = {Association for Computing Machinery, Inc},
abstract = {There is increased interest in using generative AI to create 3D spaces for Virtual Reality (VR) applications. However, today's models produce artificial environments, falling short of supporting collaborative tasks that benefit from incorporating the user's physical context. To generate environments that support VR telepresence, we introduce SpaceBlender, a novel pipeline that utilizes generative AI techniques to blend users' physical surroundings into unified virtual spaces. This pipeline transforms user-provided 2D images into context-rich 3D environments through an iterative process consisting of depth estimation, mesh alignment, and diffusion-based space completion guided by geometric priors and adaptive text prompts. In a preliminary within-subjects study, where 20 participants performed a collaborative VR affinity diagramming task in pairs, we compared SpaceBlender with a generic virtual environment and a state-of-the-art scene generation framework, evaluating its ability to create virtual spaces suitable for collaboration. Participants appreciated the enhanced familiarity and context provided by SpaceBlender but also noted complexities in the generative environments that could detract from task focus. Drawing on participant feedback, we propose directions for improving the pipeline and discuss the value and design of blended spaces for different scenarios. © 2024 ACM.},
keywords = {3D modeling, 3D scenes, 3D spaces, AI techniques, Artificial environments, Collaborative spaces, Collaborative tasks, Generative adversarial networks, Generative AI, Telepresence, Virtual environments, Virtual Reality, Virtual reality telepresence, Virtual spaces, VR telepresence},
pubstate = {published},
tppubtype = {inproceedings}
}
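The SpaceBlender stages named in the abstract (depth estimation, mesh alignment, and diffusion-based completion guided by geometric priors and adaptive prompts) suggest the overall shape sketched below. This is a structural sketch under assumed function names, not the published pipeline.

# Structural sketch of the iterative blending pipeline described in the
# abstract. Every function is a hypothetical stub.

def estimate_depth(image):
    """Depth map from a user-provided 2D image (stub)."""
    raise NotImplementedError

def align_meshes(meshes):
    """Register per-image geometry into one scene (stub)."""
    raise NotImplementedError

def diffusion_complete(scene, priors, prompt):
    """Fill gaps with diffusion guided by geometric priors (stub)."""
    raise NotImplementedError

def blend_spaces(user_images, base_prompt, iterations=3):
    meshes = [estimate_depth(img) for img in user_images]
    scene = align_meshes(meshes)
    for i in range(iterations):
        prompt = f"{base_prompt}, refinement pass {i + 1}"  # adaptive prompt
        scene = diffusion_complete(scene, priors=meshes, prompt=prompt)
    return scene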
Chen, X.; Gao, W.; Chu, Y.; Song, Y.
Enhancing interaction in virtual-real architectural environments: A comparative analysis of generative AI-driven reality approaches Journal Article
In: Building and Environment, vol. 266, 2024, ISSN: 0360-1323.
@article{chen_enhancing_2024,
title = {Enhancing interaction in virtual-real architectural environments: A comparative analysis of generative AI-driven reality approaches},
author = {X. Chen and W. Gao and Y. Chu and Y. Song},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205298350&doi=10.1016%2fj.buildenv.2024.112113&partnerID=40&md5=8c7d4f5477e25b021dfc5e013a851620},
doi = {10.1016/j.buildenv.2024.112113},
issn = {0360-1323},
year = {2024},
date = {2024-01-01},
journal = {Building and Environment},
volume = {266},
abstract = {The architectural environment is expanding into digital, virtual, and informational dimensions, introducing challenges in virtual-real space interaction. Traditional design methods struggle with real-time interaction, integration with existing workflows, and rapid space modification. To address these issues, we present a generative design method that enables symbiotic interaction between virtual and real spaces using Mixed Reality (MR) and Generative Artificial Intelligence (AI) technologies. We developed two approaches: one using the Rhino modeling platform and the other based on the Unity3D game engine, tailored to different application needs. User experience testing in exhibition, leisure, and residential spaces evaluated our method's effectiveness. Results showed significant improvements in design flexibility, interactive efficiency, and user satisfaction. In the exhibition scenario, the Unity3D-based method excelled in rapid design modifications and immersive experiences. Questionnaire data indicated that MR offers good visual comfort and higher immersion than VR, effectively supporting architects in interface and scale design. Clustering analysis of participants' position and gaze data revealed diverse behavioral patterns in the virtual-physical exhibition space, providing insights for optimizing spatial layouts and interaction methods. Our findings suggest that the generative AI-driven MR method simplifies traditional design processes by enabling real-time modification and interaction with spatial interfaces through simple verbal and motion interactions. This approach streamlines workflows by reducing steps like measuring, modeling, and rendering, while enhancing user engagement and creativity. Overall, this method offers new possibilities for experiential exhibition and architectural design, contributing to future environments where virtual and real spaces coexist seamlessly. © 2024},
keywords = {Architectural design, Architectural environment, Architectural environments, Artificial intelligence, cluster analysis, Comparative analyzes, comparative study, Computational design, Generative adversarial networks, Generative AI, generative artificial intelligence, Mixed reality, Real time interactions, Real-space, Unity3d, Virtual addresses, Virtual environments, Virtual Reality, Virtual spaces, Work-flows},
pubstate = {published},
tppubtype = {article}
}
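The real-time verbal modification the study reports can be pictured as a transcribe, regenerate, re-render loop. The sketch below is illustrative only: the actual approaches are built on Rhino and Unity3D, and every name here is an assumption.

# Verbal real-time modification loop: speech becomes a prompt, the generator
# updates the scene geometry, and the MR view refreshes. All stubs are
# hypothetical.

def transcribe(utterance) -> str:
    """Speech-to-text (stub)."""
    raise NotImplementedError

def generate_geometry(prompt: str, scene):
    """Modify the current scene from a text prompt (stub)."""
    raise NotImplementedError

def render_mixed_reality(scene) -> None:
    """Push the updated scene to the MR headset (stub)."""
    raise NotImplementedError

def interaction_loop(audio_stream, scene):
    for utterance in audio_stream:
        prompt = transcribe(utterance)            # e.g. "raise the ceiling"
        scene = generate_geometry(prompt, scene)  # modify, don't rebuild
        render_mixed_reality(scene)               # immediate feedback
    return scene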