AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2025
Ozeki, R.; Yonekura, H.; Rizk, H.; Yamaguchi, H.
Cellular-based Indoor Localization with Adapted LLM and Label-aware Contrastive Learning (Proceedings Article)
In: pp. 138–145, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331586461.
@inproceedings{ozeki_cellular-based_2025,
title = {Cellular-based Indoor Localization with Adapted LLM and Label-aware Contrastive Learning},
author = {R. Ozeki and H. Yonekura and H. Rizk and H. Yamaguchi},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105010820397&doi=10.1109%2FSMARTCOMP65954.2025.00070&partnerID=40&md5=9e15d9f4225f00cd57bedc511aad27d9},
doi = {10.1109/SMARTCOMP65954.2025.00070},
isbn = {9798331586461},
year = {2025},
date = {2025-01-01},
pages = {138--145},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Accurate indoor positioning is essential for mobile computing, human-computer interaction, and next-generation smart environments, enabling applications in indoor navigation, augmented reality, personalized services, healthcare, and emergency response. Cellular signal fingerprinting has emerged as a widely adopted solution, with deep learning models achieving state-of-the-art performance. However, existing approaches face critical deployment challenges, including labor-intensive fingerprinting, sparse reference points, and missing RSS values caused by environmental interference, hardware variability, and dynamic signal fluctuations. These limitations hinder their scalability, adaptability, and real-world usability in complex indoor environments. To address these challenges, we present GPT2Loc, a novel indoor localization framework that integrates an LLM with label-aware contrastive learning, improving accuracy while reducing reliance on extensive fingerprinting. LLMs effectively extract meaningful spatial features from incomplete and noisy RSS data, enabling robust localization even in sparsely fingerprinted areas. Our label-aware contrastive learning approach further enhances generalization by aligning latent representations with spatial relationships, allowing GPT2Loc to interpolate user locations in unseen areas and mitigate signal inconsistencies. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Cellular Network, Cellulars, Computer interaction, Contrastive Learning, Deep learning, Human computer interaction, Indoor Localization, Indoor Navigation, Indoor positioning, Indoor positioning systems, Language Model, Large language model, Learning systems, Mobile computing, Mobile-computing, Signal processing, Smart Environment, Wireless networks},
pubstate = {published},
tppubtype = {inproceedings}
}
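The label-aware contrastive objective described in the abstract above can be illustrated with a minimal numpy sketch. This is not the authors' GPT2Loc implementation: the function name, the spatial-radius rule for choosing positive pairs, and the temperature value are all assumptions for illustration, in the spirit of supervised InfoNCE-style contrastive losses.

```python
import numpy as np

def label_aware_contrastive_loss(z, coords, radius=1.0, temperature=0.1):
    """Sketch of a label-aware contrastive loss.

    z:      (N, D) L2-normalised embeddings of RSS fingerprints.
    coords: (N, 2) ground-truth positions; samples within `radius`
            of each other are treated as positives (an assumed rule).
    """
    n = z.shape[0]
    sim = z @ z.T / temperature  # temperature-scaled cosine similarities
    # Positive mask: spatially close pairs, excluding self-pairs.
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    pos = (dist < radius) & ~np.eye(n, dtype=bool)
    # Log-softmax over all non-self pairs (InfoNCE denominator).
    logits = sim - sim.max(axis=1, keepdims=True)
    exp = np.exp(logits) * ~np.eye(n, dtype=bool)
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    # Average log-probability over each anchor's positives.
    per_anchor = (log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor[pos.any(axis=1)].mean()
```

Anchors with no spatially close neighbour contribute no positive pairs and are excluded from the average.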
Yang, T.; Zhang, P.; Zheng, M.; Li, N.; Ma, S.
EnvReconGPT: A Generative AI Model for Wireless Environment Reconstruction in the 6G Metaverse (Proceedings Article)
In: Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 9798331543709.
@inproceedings{yang_envrecongpt_2025,
title = {EnvReconGPT: A Generative AI Model for Wireless Environment Reconstruction in the 6G Metaverse},
author = {T. Yang and P. Zhang and M. Zheng and N. Li and S. Ma},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105017962023&doi=10.1109%2FINFOCOMWKSHPS65812.2025.11152849&partnerID=40&md5=3d776e0774732f91291ba6ff90078957},
doi = {10.1109/INFOCOMWKSHPS65812.2025.11152849},
isbn = {9798331543709},
year = {2025},
date = {2025-01-01},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {This study introduces EnvReconGPT, a Transformer-based generative model specifically designed for wireless environment reconstruction in the 6G Metaverse, with a parameter size of 3.5 billion. The model leverages the generative capabilities of Transformers to frame wireless environment reconstruction as an imaging problem, treating it as a task of generating high-fidelity 3D point clouds. A novel multimodal mechanism is proposed to fuse base station positional information with Channel Frequency Response data, enabling the model to fully capture the spatial and spectral characteristics of the environment. Additionally, a feature embedding method is developed to integrate these multimodal inputs into the Transformer architecture effectively. By employing the Chamfer Distance as the loss function, EnvReconGPT ensures precise geometric alignment between predicted and ground truth point clouds, achieving robust performance across diverse scenarios. This work highlights the potential of generative AI in advancing integrated sensing and communication (ISAC) systems and enabling wireless systems to meet the demands of the 6G Metaverse. © 2025 Elsevier B.V., All rights reserved.},
keywords = {Environment reconstruction, Generative model, Generative pretrained transformer, High-fidelity, Imaging problems, Integrated sensing, Integrated sensing and communication, Metaverses, Mobile telecommunication systems, Signal processing, Three dimensional computer graphics, Wireless environment, Wireless environment reconstruction},
pubstate = {published},
tppubtype = {inproceedings}
}
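The Chamfer Distance loss named in the abstract above has a standard closed form. The brute-force numpy sketch below is an illustration of that standard symmetric variant for small clouds, not the paper's implementation.

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer Distance between two point clouds.

    p1: (N, 3) array, p2: (M, 3) array.
    For each point in one cloud, take the squared distance to the
    nearest point in the other cloud; average both directions and sum.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    diff = p1[:, None, :] - p2[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Nearest-neighbour terms in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical clouds have zero Chamfer Distance.
cloud = np.random.rand(100, 3)
assert chamfer_distance(cloud, cloud) == 0.0
```

The (N, M) distance matrix makes this O(NM) in memory; practical point-cloud losses use chunked or KD-tree nearest-neighbour queries instead.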
2023
Marín-Morales, J.; Llanes-Jurado, J.; Minissi, M. E.; Gomez-Zaragoza, L.; Altozano, A.; Alcañiz Raya, M.
Gaze and Head Movement Patterns of Depressive Symptoms During Conversations with Emotional Virtual Humans (Proceedings Article)
In: Int. Conf. Affect. Comput. Intell. Interact., ACII, Institute of Electrical and Electronics Engineers Inc., 2023, ISBN: 9798350327434.
@inproceedings{marin-morales_gaze_2023,
title = {Gaze and Head Movement Patterns of Depressive Symptoms During Conversations with Emotional Virtual Humans},
author = {J. Marín-Morales and J. Llanes-Jurado and M. E. Minissi and L. Gomez-Zaragoza and A. Altozano and M. Alcañiz Raya},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85184656388&doi=10.1109%2FACII59096.2023.10388134&partnerID=40&md5=cab248b0afb9c55ee54331522c0dd30d},
doi = {10.1109/ACII59096.2023.10388134},
isbn = {9798350327434},
year = {2023},
date = {2023-01-01},
booktitle = {Int. Conf. Affect. Comput. Intell. Interact., ACII},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Depressive symptoms involve dysfunctional social attitudes and heightened negative emotional states. Identifying biomarkers requires data collection in realistic environments that activate depression-specific phenomena. However, no previous research has analysed biomarkers in combination with AI-powered conversational virtual humans (VH) for mental health assessment. This study aims to explore gaze and head movement patterns related to depressive symptoms during conversations with emotional VH. A total of 105 participants were evenly divided into a control group and a group of subjects with depressive symptoms (SDS). They completed six semi-guided conversations designed to evoke basic emotions. The VHs were developed using a cognitive-inspired framework, enabling real-time voice-based conversational interactions powered by a Large Language Model, and including emotional facial expressions and lip synchronization. They embed a life history, context, attitudes, emotions and motivations. Signal processing techniques were applied to obtain gaze and head movement features, and heatmaps were generated. Then, parametric and non-parametric statistical tests were applied to evaluate differences between groups. Additionally, a two-dimensional t-SNE embedding was created and combined with k-means clustering. Results indicate that SDS exhibited shorter blinks and longer saccades. The control group showed affiliative lateral head gyros and accelerations, while the SDS demonstrated stress-related back-and-forth movements. SDS also displayed avoidance of eye contact. The exploratory multivariate statistical unsupervised learning achieved 72.3% accuracy. The present study analyses biomarkers in affective processes with multiple social contextual factors and information modalities in ecological environments, and enhances our understanding of gaze and head movement patterns in individuals with depressive symptoms, ultimately contributing to the development of more effective assessments and intervention strategies. © 2024 Elsevier B.V., All rights reserved.},
keywords = {Biomarkers, Clustering, Clusterings, Computational Linguistics, Depressive disorder, Depressive symptom, E-Learning, Emotion elicitation, Eye movements, Gaze movements, K-means clustering, Language Model, Large language model, large language models, Learning systems, Mental health, Multivariant analysis, Signal processing, Statistical learning, virtual human, Virtual humans, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
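The exploratory pipeline in the last abstract (a two-dimensional t-SNE embedding followed by k-means clustering) can be sketched in numpy. The code below implements only the k-means step on an already-projected 2-D embedding, with a simple deterministic initialisation; it is an illustration, not the study's code, and `kmeans_2d` and its defaults are invented for this sketch.

```python
import numpy as np

def kmeans_2d(x, k=2, iters=50):
    """Plain k-means on a 2-D embedding (the unsupervised step the
    paper pairs with a t-SNE projection of gaze/head features).

    x: (N, 2) array of embedded points. Returns one label per point.
    """
    # Deterministic init: k points spread evenly through the array.
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centre.
        d = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels
```

Comparing the resulting cluster labels against the control/SDS group labels gives a cluster-purity-style accuracy, which is how an exploratory figure such as the reported 72.3% could be computed; the paper's exact procedure is not reproduced here.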