AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
OUR RESEARCH
Scientific Publications
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTex record for each paper.
2024
Li, K.; Gulati, M.; Shah, D.; Waskito, S.; Chakrabarty, S.; Varshney, A.
PixelGen: Rethinking Embedded Cameras for Mixed-Reality (Proceedings Article)
In: ACM MobiCom - Proc. Int. Conf. Mob. Comput. Netw., pp. 2128–2135, Association for Computing Machinery, Inc, 2024, ISBN: 979-8-4007-0489-5.
Abstract | Links | BibTeX | Tags: Blind spots, embedded systems, Embedded-system, Field of views, Language Model, Large language model, large language models, Mixed reality, Networking, Partial views, Pixels, Power, Visible spectrums
@inproceedings{li_pixelgen_2024,
title = {PixelGen: Rethinking Embedded Cameras for Mixed-Reality},
author = {K. Li and M. Gulati and D. Shah and S. Waskito and S. Chakrabarty and A. Varshney},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105002721208&doi=10.1145%2f3636534.3696216&partnerID=40&md5=97ee680318c72552b3e642aa57aaeca5},
doi = {10.1145/3636534.3696216},
isbn = {979-8-4007-0489-5},
year = {2024},
date = {2024-01-01},
booktitle = {ACM MobiCom - Proc. Int. Conf. Mob. Comput. Netw.},
pages = {2128–2135},
publisher = {Association for Computing Machinery, Inc},
abstract = {Mixed-reality headsets offer new ways to perceive our environment. They employ visible spectrum cameras to capture and display the environment on screens in front of the user's eyes. However, these cameras lead to limitations. Firstly, they capture only a partial view of the environment. They are positioned to capture whatever is in front of the user, thus creating blind spots during complete immersion and failing to detect events outside the restricted field of view. Secondly, they capture only visible light fields, ignoring other fields like acoustics and radio that are also present in the environment. Finally, these power-hungry cameras rapidly deplete the mixed-reality headset's battery. We introduce PixelGen to rethink embedded cameras for mixed-reality headsets. PixelGen proposes to decouple cameras from the mixed-reality headset and balance resolution and fidelity to minimize power consumption. It employs low-resolution, monochrome image sensors and environmental sensors to capture the surroundings of the headset. This approach reduces the system's communication bandwidth and power consumption. A transformer-based language and image model processes this information to overcome resolution trade-offs, thus generating a higher-resolution representation of the environment. We present initial experiments that show PixelGen's viability. © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.},
keywords = {Blind spots, embedded systems, Embedded-system, Field of views, Language Model, Large language model, large language models, Mixed reality, Networking, Partial views, Pixels, Power, Visible spectrums},
pubstate = {published},
tppubtype = {inproceedings}
}
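The abstract describes a concrete pipeline: a decoupled, low-resolution monochrome sensor (plus environmental sensors) transmits a small payload, and a transformer-based language-and-image model on the receiving side generates a higher-resolution representation. The Python sketch below illustrates only the data flow and the bandwidth argument; the frame sizes, the sensor values, and the generate_highres() stub are illustrative assumptions, not the paper's actual model or code.

# Minimal sketch of the PixelGen data flow described in the abstract.
# All parameters below are assumptions for illustration, not from the paper.
import numpy as np

LOW_RES = (64, 64)        # assumed monochrome resolution of the decoupled sensor
HIGH_RES = (1024, 1024)   # assumed resolution of the generated representation

def capture_lowres_frame() -> np.ndarray:
    """Simulate the low-power, low-resolution monochrome image sensor."""
    return np.random.randint(0, 256, size=LOW_RES, dtype=np.uint8)

def read_environment_sensors() -> dict:
    """Simulate auxiliary environmental sensors (illustrative values)."""
    return {"temperature_c": 22.5, "sound_level_db": 48.0}

def generate_highres(frame: np.ndarray, context: dict) -> np.ndarray:
    """Placeholder for the transformer-based language-and-image model that
    overcomes the resolution trade-off. A nearest-neighbour upsample stands
    in for it here so the sketch runs end to end."""
    reps = (HIGH_RES[0] // frame.shape[0], HIGH_RES[1] // frame.shape[1])
    return np.kron(frame, np.ones(reps, dtype=np.uint8))

low = capture_lowres_frame()
env = read_environment_sensors()
high = generate_highres(low, env)

# Back-of-envelope bandwidth comparison: the low-res monochrome payload
# versus an uncompressed 1080p RGB frame (8 bits per channel).
lowres_bytes = low.size               # 64 * 64 * 1 = 4,096 bytes
rgb_1080p_bytes = 1920 * 1080 * 3     # ~6.2 MB
print(f"low-res payload:  {lowres_bytes} bytes")
print(f"1080p RGB frame:  {rgb_1080p_bytes} bytes")
print(f"reduction factor: {rgb_1080p_bytes / lowres_bytes:.0f}x")
print(f"generated view:   {high.shape}")

The point of the sketch is the ordering of the trade-off: fidelity is recovered after transmission by the generative model, so the radio link and the camera only ever carry the small monochrome frame.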