AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
OUR RESEARCH
Scientific Publications
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTeX record for each paper.
2025
Scofano, L.; Sampieri, A.; De Matteis, E.; Spinelli, I.; Galasso, F.
Social EgoMesh Estimation (Proceedings Article)
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 5948–5958, IEEE, 2025, ISBN: 979-833151083-1.
Tags: Augmented reality applications, Ego-motion, Egocentric view, Generative AI, Human behaviors, Human mesh recovery, Limited visibility, Recent researches, Three dimensional computer graphics, Video sequences, Virtual and augmented reality
@inproceedings{scofano_social_2025,
title = {Social EgoMesh Estimation},
author = {L. Scofano and A. Sampieri and E. De Matteis and I. Spinelli and F. Galasso},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105003632729&doi=10.1109%2fWACV61041.2025.00580&partnerID=40&md5=3c2b2d069ffb596c64ee8dbc211b74a8},
doi = {10.1109/WACV61041.2025.00580},
isbn = {979-833151083-1},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Winter Conf. Appl. Comput. Vis., WACV},
pages = {5948–5958},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Accurately estimating the 3D pose of the camera wearer in egocentric video sequences is crucial to modeling human behavior in virtual and augmented reality applications. The task presents unique challenges due to the limited visibility of the user's body caused by the front-facing camera mounted on their head. Recent research has explored the utilization of the scene and ego-motion, but it has overlooked humans' interactive nature. We propose a novel framework for Social Egocentric Estimation of body MEshes (SEE-ME). Our approach is the first to estimate the wearer's mesh using only a latent probabilistic diffusion model, which we condition on the scene and, for the first time, on the social wearer-interactee interactions. Our in-depth study sheds light on when social interaction matters most for ego-mesh estimation; it quantifies the impact of interpersonal distance and gaze direction. Overall, SEEME surpasses the current best technique, reducing the pose estimation error (MPJPE) by 53%. The code is available at SEEME. © 2025 IEEE.},
keywords = {Augmented reality applications, Ego-motion, Egocentric view, Generative AI, Human behaviors, Human mesh recovery, Limited visibility, Recent researches, Three dimensional computer graphics, Video sequences, Virtual and augmented reality},
pubstate = {published},
tppubtype = {inproceedings}
}
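The 53% improvement reported in the abstract refers to MPJPE (Mean Per Joint Position Error), the standard 3D pose-estimation metric: the Euclidean distance between predicted and ground-truth joint positions, averaged over joints and frames. For readers unfamiliar with the metric, here is a minimal NumPy sketch (illustrative only; the variable names are ours, not taken from the SEE-ME code):

import numpy as np

def mpjpe(pred_joints, gt_joints):
    # pred_joints, gt_joints: arrays of shape (n_frames, n_joints, 3),
    # in the same units (typically millimetres).
    # Per-joint Euclidean distance, averaged over all joints and frames.
    return np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()

Lower is better, so a 53% reduction in MPJPE means the average per-joint error roughly halves relative to the previous best technique.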