AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
2024
Nebeling, M.; Oki, M.; Gelsomini, M.; Hayes, G. R.; Billinghurst, M.; Suzuki, K.; Graf, R.
Designing Inclusive Future Augmented Realities Proceedings Article
In: Conference on Human Factors in Computing Systems - Proceedings, Association for Computing Machinery, 2024, ISBN: 979-8-4007-0331-7.
@inproceedings{nebeling_designing_2024,
title = {Designing Inclusive Future Augmented Realities},
author = {M. Nebeling and M. Oki and M. Gelsomini and G. R. Hayes and M. Billinghurst and K. Suzuki and R. Graf},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85194176929&doi=10.1145%2f3613905.3636313&partnerID=40&md5=411b65058a4c96149182237aa586fa75},
doi = {10.1145/3613905.3636313},
isbn = {979-8-4007-0331-7},
year = {2024},
date = {2024-01-01},
booktitle = {Conference on Human Factors in Computing Systems - Proceedings},
publisher = {Association for Computing Machinery},
abstract = {Augmented and mixed reality technology is rapidly advancing, driven by innovations in display, sensing, and AI technologies. This evolution, particularly in the era of generative AI with large language and text-to-image models such as GPT and Stable Diffusion, has the potential, not only to make it easier to create, but also to adapt and personalize, new content. Our workshop explores the pivotal role of augmented and mixed reality to shape a user's interactions with their physical surroundings. We aim to explore how inclusive future augmented realities can be designed, with increasing support for automation, such that environments can welcome users with different needs, emphasizing accessibility and inclusion through layers of augmentations. Our aim is not only to remove barriers by providing accommodations, but also to create a sense of belonging by directly engaging users. Our workshop consists of three main activities: (1) Through brainstorming and discussion of examples provided by the workshop organizers and participants, we critically review the landscape of accessible and inclusive design and their vital role in augmented and mixed reality experiences. (2) Through rapid prototyping activities including bodystorming and low-fidelity, mixed-media prototypes, participants explore how augmented and mixed reality can transform physical space into a more personal place, enhancing accessibility and inclusion based on novel interface and interaction techniques that are desirable, but not necessarily technically feasible just yet. In the workshop, we plan to focus on physical space to facilitate rapid prototyping without technical constraints, but techniques developed in the workshop are likely applicable to immersive virtual environments as well. (3) Finally, we collaborate to outline a research agenda for designing future augmented realities that promote equal opportunities, benefiting diverse user populations. 
Our workshop inspires innovation in augmented and mixed reality, reshaping physical environments to be more accessible and inclusive through immersive design. © 2024 Owner/Author.},
keywords = {Accessible and inclusive design, Augmented Reality, Augmented reality technology, Display technologies, Generative AI, Inclusive design, Interactive computer graphics, Mixed reality, Mixed reality technologies, Rapid prototyping, Sensing technology, Spatial computing},
pubstate = {published},
tppubtype = {inproceedings}
}
Constantinides, N.; Constantinides, A.; Koukopoulos, D.; Fidas, C.; Belk, M.
CulturAI: Exploring Mixed Reality Art Exhibitions with Large Language Models for Personalized Immersive Experiences Proceedings Article
In: Adjunct Proceedings of the ACM Conference on User Modeling, Adaptation and Personalization (UMAP), pp. 102–105, Association for Computing Machinery, Inc, 2024, ISBN: 979-8-4007-0466-6.
@inproceedings{constantinides_culturai_2024,
title = {CulturAI: Exploring Mixed Reality Art Exhibitions with Large Language Models for Personalized Immersive Experiences},
author = {N. Constantinides and A. Constantinides and D. Koukopoulos and C. Fidas and M. Belk},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85198910809&doi=10.1145%2f3631700.3664874&partnerID=40&md5=952d82629a3fcdc6e2a960dd532b2c09},
doi = {10.1145/3631700.3664874},
isbn = {979-8-4007-0466-6},
year = {2024},
date = {2024-01-01},
booktitle = {Adjunct Proceedings of the ACM Conference on User Modeling, Adaptation and Personalization (UMAP)},
pages = {102–105},
publisher = {Association for Computing Machinery, Inc},
abstract = {Mixed Reality (MR) technologies have transformed the way in which we interact and engage with digital content, offering immersive experiences that blend the physical and virtual worlds. Over the past years, there has been increasing interest in employing Artificial Intelligence (AI) technologies to improve user experience and trustworthiness in cultural contexts. However, the integration of Large Language Models (LLMs) into MR applications within the Cultural Heritage (CH) domain is relatively underexplored. In this work, we present an investigation into the integration of LLMs within MR environments, focusing on the context of virtual art exhibitions. We implemented a HoloLens MR application, which enables users to explore artworks while interacting with an LLM through voice. To evaluate the user experience and perceived trustworthiness of individuals engaging with an LLM-based virtual art guide, we adopted a between-subject study design, in which participants were randomly assigned to either the LLM-based version or a control group using conventional interaction methods. The LLM-based version allows users to pose inquiries about the artwork displayed, ranging from details about the creator to information about the artwork's origin and historical significance. This paper presents the technical aspects of integrating LLMs within MR applications and evaluates the user experience and perceived trustworthiness of this approach in enhancing the exploration of virtual art exhibitions. Results of an initial evaluation provide evidence about the positive aspect of integrating LLMs in MR applications. Findings of this work contribute to the advancement of MR technologies for the development of future interactive personalized art experiences. © 2024 Owner/Author.},
keywords = {Computational Linguistics, Immersive, Language Model, Large language models, Mixed reality, Mixed reality art, Mixed reality technologies, Model-based OPC, User Experience Evaluation, User interfaces, User study, Users' experiences},
pubstate = {published},
tppubtype = {inproceedings}
}