AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference proceedings, workshops, and books.
How to
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links, and BibTeX record for each paper.
2025
Xing, Y.; Ban, J.; Hubbard, T. D.; Villano, M.; Gómez-Zará, D.
Immersed in my Ideas: Using Virtual Reality and LLMs to Visualize Users’ Ideas and Thoughts Proceedings Article
In: Int Conf Intell User Interfaces Proc IUI, pp. 60–65, Association for Computing Machinery, 2025, ISBN: 979-840071409-2.
Abstract | Links | BibTeX | Tags: 3-D environments, 3D modeling, Computer simulation languages, Creativity, Idea Generation, Immersive, Interactive virtual reality, Language Model, Large language model, Multimodal Interaction, Reflection, Text Visualization, Think aloud, Virtual environments, Virtual Reality, Visualization
@inproceedings{xing_immersed_2025,
title = {Immersed in my Ideas: Using Virtual Reality and LLMs to Visualize Users’ Ideas and Thoughts},
author = {Y. Xing and J. Ban and T. D. Hubbard and M. Villano and D. Gómez-Zará},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001675169&doi=10.1145%2f3708557.3716330&partnerID=40&md5=20fb0623d2a1fff92282116b01fac4f3},
doi = {10.1145/3708557.3716330},
isbn = {979-840071409-2},
year = {2025},
date = {2025-01-01},
booktitle = {Int Conf Intell User Interfaces Proc IUI},
pages = {60–65},
publisher = {Association for Computing Machinery},
abstract = {We introduce the Voice Interactive Virtual Reality Annotation (VIVRA), an application that employs Large Language Models to facilitate brainstorming and idea exploration in an immersive 3D environment. As users think aloud to brainstorm and ideate, the application automatically detects, summarizes, suggests, and connects their ideas in real time. The experience brings participants into a room where their ideas emerge as interactive objects that embody the topics detected from their ideas. We evaluated the effectiveness of VIVRA in an exploratory study with 29 participants, followed by a user study with 10 participants comparing the application with other visualizations. Our results show that VIVRA helped participants reflect and think more about their ideas, serving as a valuable tool for personal exploration. We discuss the potential benefits and applications, highlighting the benefits of combining immersive 3D spaces and LLMs to explore, learn, and reflect on ideas. © 2025 Copyright held by the owner/author(s).},
keywords = {3-D environments, 3D modeling, Computer simulation languages, Creativity, Idea Generation, Immersive, Interactive virtual reality, Language Model, Large language model, Multimodal Interaction, Reflection, Text Visualization, Think aloud, Virtual environments, Virtual Reality, Visualization},
pubstate = {published},
tppubtype = {inproceedings}
}
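The VIVRA abstract above describes a real-time loop: think-aloud speech is transcribed, an LLM detects and summarizes the ideas, and each idea appears as an interactive 3D object. The paper's implementation is not reproduced here; the following Python sketch is only a minimal illustration of how such a loop could be wired up, under stated assumptions: call_llm is a hypothetical stand-in for any chat-completion client, and the JSON schema and IdeaObject type are invented for the example.

# Minimal sketch of a VIVRA-style idea-detection loop (not the authors' code).
# Assumptions: call_llm() is a placeholder for a real LLM client; the JSON
# schema and the IdeaObject type are invented for illustration.
import json
from dataclasses import dataclass

@dataclass
class IdeaObject:
    topic: str        # short label the LLM detected
    summary: str      # one-sentence summary shown on the 3D object
    links: list       # topics this idea connects to

PROMPT = (
    "Extract the distinct ideas from this think-aloud transcript. "
    'Reply with a JSON list: [{"topic": str, "summary": str, "links": [str]}].\n\n'
)

def call_llm(prompt: str) -> str:
    # Placeholder: swap in an actual chat-completion call here.
    return '[{"topic": "recycling app", "summary": "An app that rewards recycling.", "links": []}]'

def detect_ideas(transcript: str) -> list[IdeaObject]:
    # One pass of the detect/summarize/connect step described in the abstract.
    return [IdeaObject(**d) for d in json.loads(call_llm(PROMPT + transcript))]

for idea in detect_ideas("Maybe an app that gives you points for recycling..."):
    # A real client would spawn an interactive object in the VR room instead.
    print(f"spawn: {idea.topic!r} -> {idea.summary}")

In the actual system this step would run continuously on streamed speech; the single-pass version above just keeps the data flow visible.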
2024
Manesh, S. A.; Zhang, T.; Onishi, Y.; Hara, K.; Bateman, S.; Li, J.; Tang, A.
How People Prompt Generative AI to Create Interactive VR Scenes Proceedings Article
In: Vallgarda, A.; Jonsson, L.; Fritsch, J.; Alaoui, S. F.; Le Dantec, C. A. (Ed.): Proc. ACM Des. Interact. Syst. Conf., pp. 2319–2340, Association for Computing Machinery, Inc, 2024, ISBN: 979-840070583-0.
Abstract | Links | BibTeX | Tags: Embodied interaction, Embodied knowledge, Embodied prompting, Generative AI, Interactive virtual reality, Multi-modal, Natural languages, Programming agents, Prompting, User interfaces, Virtual Reality, Wizard of Oz
@inproceedings{manesh_how_2024,
title = {How People Prompt Generative AI to Create Interactive VR Scenes},
author = {S. A. Manesh and T. Zhang and Y. Onishi and K. Hara and S. Bateman and J. Li and A. Tang},
editor = {Vallgarda, A. and Jonsson, L. and Fritsch, J. and Alaoui, S. F. and Le Dantec, C. A.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85200348302&doi=10.1145%2f3643834.3661547&partnerID=40&md5=11831bb65214fd75905ccdaeb8356cdf},
doi = {10.1145/3643834.3661547},
isbn = {979-840070583-0},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. ACM Des. Interact. Syst. Conf.},
pages = {2319–2340},
publisher = {Association for Computing Machinery, Inc},
abstract = {Generative AI tools can provide people with the ability to create virtual environments and scenes with natural language prompts. Yet, how people will formulate such prompts is unclear—particularly when they inhabit the environment that they are designing. For instance, it is likely that a person might say, “Put a chair here,” while pointing at a location. If such linguistic and embodied features are common to people’s prompts, we need to tune models to accommodate them. In this work, we present a Wizard of Oz elicitation study with 22 participants, where we studied people’s implicit expectations when verbally prompting such programming agents to create interactive VR scenes. Our findings show when people prompted the agent, they had several implicit expectations of these agents: (1) they should have an embodied knowledge of the environment; (2) they should understand embodied prompts by users; (3) they should recall previous states of the scene and the conversation, and that (4) they should have a commonsense understanding of objects in the scene. Further, we found that participants prompted differently when they were prompting in situ (i.e. within the VR environment) versus ex situ (i.e. viewing the VR environment from the outside). To explore how these lessons could be applied, we designed and built Ostaad, a conversational programming agent that allows non-programmers to design interactive VR experiences that they inhabit. Based on these explorations, we outline new opportunities and challenges for conversational programming agents that create VR environments. © 2024 Copyright held by the owner/author(s).},
keywords = {Embodied interaction, Embodied knowledge, Embodied prompting, Generative AI, Interactive virtual reality, Multi-modal, Natural languages, Programming agents, Prompting, User interfaces, Virtual Reality, Wizard of Oz},
pubstate = {published},
tppubtype = {inproceedings}
}
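The findings listed in the abstract above (embodied knowledge, embodied prompts, conversation recall) translate directly into state a VR programming agent must track. As a rough, hypothetical illustration only, and not Ostaad's implementation, this sketch grounds the abstract's "Put a chair here" example against a pointing ray and a remembered scene state; PointerRay, SceneState, and handle_prompt are all invented names.

# Hypothetical sketch (not Ostaad's code) of grounding an embodied VR prompt.
from dataclasses import dataclass, field

@dataclass
class PointerRay:
    origin: tuple      # controller position (x, y, z)
    hit_point: tuple   # where the pointing ray meets the floor or an object

@dataclass
class SceneState:
    objects: dict = field(default_factory=dict)  # expectation (1): embodied knowledge of the scene
    history: list = field(default_factory=list)  # expectation (3): recall of prior prompts and states

def handle_prompt(text: str, ray: PointerRay, scene: SceneState) -> str:
    scene.history.append(text)                   # remember the conversation
    lowered = text.lower()
    if "here" in lowered:                        # expectation (2): resolve the deictic "here"
        name = lowered.split("put a ")[-1].split(" here")[0]
        scene.objects[name] = ray.hit_point      # ground "here" to the pointed-at location
        return f"placed {name!r} at {ray.hit_point}"
    return "no embodied reference to resolve"

ray = PointerRay(origin=(0.0, 1.5, 0.0), hit_point=(2.0, 0.0, 1.0))
print(handle_prompt("Put a chair here", ray, SceneState()))

A production agent would hand the utterance and the ray hit to an LLM rather than string-match, but the point of the paper stands either way: the agent needs the embodied context alongside the words.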