AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
OUR RESEARCH
Scientific Publications
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTeX record for each paper.
2025
Arevalo Espinal, W. Y.; Jimenez, J.; Corneo, L.
An eXtended Reality Data Transformation Framework for Internet of Things Devices Integration (Proceedings Article)
In: IoT - Proc. Int. Conf. Internet Things, pp. 10–18, Association for Computing Machinery, Inc, 2025, ISBN: 979-840071285-2.
Tags: Application programs, Comprehensive evaluation, Data integration, Data Transformation, Device and Data Integration, Devices integration, Extended reality, Generative AI, Interactive objects, Internet of Things, Language Model, Software runtime, Time-consuming tasks
@inproceedings{arevalo_espinal_extended_2025,
title = {An eXtended Reality Data Transformation Framework for Internet of Things Devices Integration},
author = {W. Y. Arevalo Espinal and J. Jimenez and L. Corneo},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105002862430&doi=10.1145%2f3703790.3703792&partnerID=40&md5=6ba7d70e00e3b0803149854b5744e55e},
doi = {10.1145/3703790.3703792},
isbn = {979-840071285-2},
year = {2025},
date = {2025-01-01},
booktitle = {IoT - Proc. Int. Conf. Internet Things},
pages = {10–18},
publisher = {Association for Computing Machinery, Inc},
abstract = {The multidisciplinary nature of XR applications makes device and data integration a resource-intensive and time-consuming task, especially in the context of the Internet of Things (IoT). This paper presents Visualize Interactive Objects, VIO for short, a data transformation framework aimed at simplifying visualization and interaction of IoT devices and their data into XR applications. VIO comprises a software runtime (VRT) running on XR headsets, and a JSON-based syntax for defining VIO Descriptions (VDs). The VRT interprets VDs to facilitate visualization and interaction within the application. By raising the level of abstraction, VIO enhances interoperability among XR experiences and enables developers to integrate IoT data with minimal coding effort. A comprehensive evaluation demonstrated that VIO is lightweight, incurring in negligible overhead compared to native implementations. Ten Large Language Models (LLM) were used to generate VDs and native source-code from user intents. The results showed that LLMs have superior syntactical and semantical accuracy in generating VDs compared to native XR application development code, thus indicating that the task of creating VDs can be effectively automated using LLMs. Additionally, a user study with 12 participants found that VIO is developer-friendly and easily extensible. © 2024 Copyright held by the owner/author(s).},
keywords = {Application programs, Comprehensive evaluation, Data integration, Data Transformation, Device and Data Integration, Devices integration, Extended reality, Generative AI, Interactive objects, Internet of Things, Language Model, Software runtime, Time-consuming tasks},
pubstate = {published},
tppubtype = {inproceedings}
}
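The abstract above describes VIO Descriptions (VDs) as JSON-based documents that a runtime (VRT) on the XR headset interprets to visualize IoT data. As a rough illustration only, the sketch below shows what such a JSON-based description and a toy interpreter might look like. The field names (device, source, visualization, interaction) and the interpret_vd helper are hypothetical and are not taken from the paper; the actual VD syntax is defined in the publication itself.

# Hypothetical sketch only: illustrates the idea of a JSON-based device
# description interpreted by a runtime. Field names are invented here.
import json

# A VD-like description mapping an IoT data source to an XR visualization.
vd_example = {
    "device": "kitchen-thermometer",        # hypothetical IoT device id
    "source": "mqtt://broker.local/temp",   # hypothetical data endpoint
    "visualization": {
        "type": "gauge",                    # how the runtime should render the value
        "range": [0, 100],
        "unit": "°C"
    },
    "interaction": {
        "onSelect": "show-history"          # hypothetical interaction binding
    }
}

def interpret_vd(vd: dict) -> None:
    """Toy stand-in for a runtime: reads a description and reports what it would render."""
    vis = vd["visualization"]
    print(f"Rendering a {vis['type']} for {vd['device']} "
          f"({vis['range'][0]}-{vis['range'][1]} {vis['unit']}) "
          f"fed by {vd['source']}")

if __name__ == "__main__":
    # Descriptions are plain JSON, so they round-trip through json.dumps/loads.
    interpret_vd(json.loads(json.dumps(vd_example)))

The point of such a format, as the abstract explains, is to raise the level of abstraction: developers describe what to visualize in data rather than writing native XR integration code for each device.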