AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference proceedings, workshops, and books.
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links, and BibTeX record for each paper.
2025
Shoa, A.; Friedman, D.
Milo: an LLM-based virtual human open-source platform for extended reality (Journal Article)
In: Frontiers in Virtual Reality, vol. 6, 2025, ISSN: 2673-4192.
Tags: Large language model, open-source, Virtual agent, virtual human, Virtual Reality, XR
@article{shoa_milo_2025,
title = {Milo: an LLM-based virtual human open-source platform for extended reality},
author = {A. Shoa and D. Friedman},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105008867438&doi=10.3389%2ffrvir.2025.1555173&partnerID=40&md5=6e68c9604b5ae52671b2ff02d51c7e75},
doi = {10.3389/frvir.2025.1555173},
issn = {2673-4192},
year = {2025},
date = {2025-01-01},
journal = {Frontiers in Virtual Reality},
volume = {6},
abstract = {Large language models (LLMs) have made dramatic advancements in recent years, allowing for a new generation of dialogue agents. This allows for new types of social experiences with virtual humans, in both virtual and augmented reality. In this paper, we introduce an open-source system specifically designed for implementing LLM-based virtual humans within extended reality (XR) environments. Our system integrates into XR platforms, providing a robust framework for the creation and management of interactive virtual agents. We detail the design and architecture of the system and showcase the system’s versatility through various scenarios. In addition to a straightforward single-agent setup, we demonstrate how an LLM-based virtual human can attend a multi-user virtual reality (VR) meeting, enhance a VR self-talk session, and take part in an augmented reality (AR) live event. We provide lessons learned, with a focus on the possibilities for human intervention during live events. We provide the system as open-source, inviting collaboration and innovation within the community, paving the way for new types of social experiences. Copyright © 2025 Shoa and Friedman.},
keywords = {Large language model, open-source, Virtual agent, virtual human, Virtual Reality, XR},
pubstate = {published},
tppubtype = {article}
}
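The abstract describes an architecture in which an LLM backend drives interactive virtual agents inside XR scenes. As a rough illustration only, the minimal Python sketch below shows what such an agent turn loop might look like; every name in it (VirtualAgent, llm_complete, respond) is hypothetical and not taken from Milo's actual API, and the LLM call is a placeholder to be swapped for a real chat-completion backend.

# A minimal, hypothetical sketch of an LLM-driven virtual agent turn loop.
# All names here (VirtualAgent, llm_complete, respond) are invented for
# illustration and are not part of Milo's actual API.

from dataclasses import dataclass, field

def llm_complete(messages: list) -> str:
    """Stand-in for a chat-completion backend; replace with a real LLM call."""
    return "Hello! I'm a placeholder reply - wire a real LLM in here."

@dataclass
class VirtualAgent:
    persona: str                                 # system prompt defining the agent's character
    history: list = field(default_factory=list)  # running dialogue context

    def respond(self, user_utterance: str) -> str:
        # Persona first, then the accumulated dialogue turns.
        self.history.append({"role": "user", "content": user_utterance})
        messages = [{"role": "system", "content": self.persona}] + self.history
        reply = llm_complete(messages)
        self.history.append({"role": "assistant", "content": reply})
        return reply  # in an XR client, this text would drive TTS and lip-sync

if __name__ == "__main__":
    agent = VirtualAgent(persona="You are a friendly virtual human in a VR meeting.")
    print(agent.respond("Hi, can you introduce yourself?"))

Keeping the persona as a fixed system message while the per-turn history grows is one simple way to give such an agent a stable character across a multi-user session, as the paper's meeting and self-talk scenarios require.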