AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
OUR RESEARCH
Scientific Publications
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTeX record for each paper.
2024
Lakhnati, Y.; Pascher, M.; Gerken, J.
Exploring a GPT-based large language model for variable autonomy in a VR-based human-robot teaming simulation Journal Article
In: Frontiers in Robotics and AI, vol. 11, 2024, ISSN: 2296-9144.
Tags: Assistive Robots, evaluation, GPT, Large language model, shared control, variable autonomy, Virtual Reality
@article{lakhnati_exploring_2024,
title = {Exploring a GPT-based large language model for variable autonomy in a VR-based human-robot teaming simulation},
author = {Y. Lakhnati and M. Pascher and J. Gerken},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85190520269&doi=10.3389%2ffrobt.2024.1347538&partnerID=40&md5=ba5dcbba299b475c3448d2ea6b493894},
doi = {10.3389/frobt.2024.1347538},
issn = {2296-9144},
year = {2024},
date = {2024-01-01},
journal = {Frontiers in Robotics and AI},
volume = {11},
abstract = {In a rapidly evolving digital landscape, autonomous tools and robots are becoming commonplace. Recognizing the significance of this development, this paper explores the integration of Large Language Models (LLMs) like the Generative Pre-trained Transformer (GPT) into human-robot teaming environments to facilitate variable autonomy through verbal human-robot communication. We introduce a novel simulation framework for such a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting. This system allows users to interact with simulated robot agents through natural language, each agent powered by an individual GPT core. By means of OpenAI’s function calling, we bridge the gap between unstructured natural language input and structured robot actions. A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a simulated multi-robot environment. Our findings suggest that users may have preconceived expectations of how to converse with robots and seldom try to explore the actual language and cognitive capabilities of their simulated robot collaborators. Still, those users who did explore were able to benefit from a much more natural flow of communication and human-like back-and-forth. We provide a set of lessons learned for future research and technical implementations of similar systems. Copyright © 2024 Lakhnati, Pascher and Gerken.},
keywords = {Assistive Robots, evaluation, GPT, Large language model, shared control, variable autonomy, Virtual Reality},
pubstate = {published},
tppubtype = {article}
}