AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference proceedings, workshops, and books.
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links, and BibTeX record for each paper.
2024
Li, Z.; Gebhardt, C.; Inglin, Y.; Steck, N.; Streli, P.; Holz, C.
SituationAdapt: Contextual UI Optimization in Mixed Reality with Situation Awareness via LLM Reasoning Proceedings Article
In: UIST - Proc. Annual ACM Symp. User Interface Softw. Technol., Association for Computing Machinery, Inc, 2024, ISBN: 979-840070628-8.
@inproceedings{li_situationadapt_2024,
title = {SituationAdapt: Contextual UI Optimization in Mixed Reality with Situation Awareness via LLM Reasoning},
author = {Z. Li and C. Gebhardt and Y. Inglin and N. Steck and P. Streli and C. Holz},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85211434729&doi=10.1145%2f3654777.3676470&partnerID=40&md5=596480194c46ab2753cffeb8cce22243},
doi = {10.1145/3654777.3676470},
isbn = {979-840070628-8},
year = {2024},
date = {2024-01-01},
booktitle = {UIST - Proc. Annual ACM Symp. User Interface Softw. Technol.},
publisher = {Association for Computing Machinery, Inc},
abstract = {Mixed Reality is increasingly used in mobile settings beyond controlled home and office spaces. This mobility introduces the need for user interface layouts that adapt to varying contexts. However, existing adaptive systems are designed only for static environments. In this paper, we introduce SituationAdapt, a system that adjusts Mixed Reality UIs to real-world surroundings by considering environmental and social cues in shared settings. Our system consists of perception, reasoning, and optimization modules for UI adaptation. Our perception module identifies objects and individuals around the user, while our reasoning module leverages a Vision-and-Language Model to assess the placement of interactive UI elements. This ensures that adapted layouts do not obstruct relevant environmental cues or interfere with social norms. Our optimization module then generates Mixed Reality interfaces that account for these considerations as well as temporal constraints. For evaluation, we first validate our reasoning module's capability of assessing UI contexts in comparison to human expert users. In an online user study, we then establish SituationAdapt's capability of producing context-aware layouts for Mixed Reality, where it outperformed previous adaptive layout methods. We conclude with a series of applications and scenarios to demonstrate SituationAdapt's versatility. © 2024 ACM.},
keywords = {Adaptive user interface, Adaptive User Interfaces, Environmental cues, Language Model, Large language model, large language models, Mixed reality, Mobile settings, Office space, Optimisations, Optimization module, Situation awareness},
pubstate = {published},
tppubtype = {inproceedings}
}