AHCI RESEARCH GROUP
Publications
Papers published in international journals,
conference and workshop proceedings, and books.
OUR RESEARCH
Scientific Publications
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links and BibTeX record for each paper.
2024
Zhang, Q.; Naradowsky, J.; Miyao, Y.
Self-Emotion Blended Dialogue Generation in Social Simulation Agents Proceedings Article
In: Kawahara, T.; Demberg, V.; Ultes, S.; Inoue, K.; Mehri, S.; Howcroft, D.; Komatani, K. (Ed.): pp. 228–247, Association for Computational Linguistics (ACL), 2024, ISBN: 9798891761612.
Abstract | Links | BibTeX | Tags: Agent behavior, Agents, Computational Linguistics, Decision making, Decisions makings, Dialogue generations, Dialogue strategy, Emotional state, Language Model, Model-driven, Natural language processing systems, Simulation framework, Social psychology, Social simulations, Speech processing, Virtual Reality, Virtual simulation environments
@inproceedings{zhang_self-emotion_2024,
title = {Self-Emotion Blended Dialogue Generation in Social Simulation Agents},
author = {Q. Zhang and J. Naradowsky and Y. Miyao},
editor = {T. Kawahara and V. Demberg and S. Ultes and K. Inoue and S. Mehri and D. Howcroft and K. Komatani},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105017744334&doi=10.18653%2Fv1%2F2024.sigdial-1.21&partnerID=40&md5=f185cfb5554eabfa85e6e956dfe6848e},
doi = {10.18653/v1/2024.sigdial-1.21},
isbn = {9798891761612},
year = {2024},
date = {2024-01-01},
pages = {228--247},
publisher = {Association for Computational Linguistics (ACL)},
abstract = {When engaging in conversations, dialogue agents in a virtual simulation environment may exhibit their own emotional states that are unrelated to the immediate conversational context, a phenomenon known as self-emotion. This study explores how such self-emotion affects the agents' behaviors in dialogue strategies and decision-making within a large language model (LLM)-driven simulation framework. In a dialogue strategy prediction experiment, we analyze the dialogue strategy choices employed by agents both with and without self-emotion, comparing them to those of humans. The results show that incorporating self-emotion helps agents exhibit more human-like dialogue strategies. In an independent experiment comparing the performance of models fine-tuned on GPT-4 generated dialogue datasets, we demonstrate that self-emotion can lead to better overall naturalness and humanness. Finally, in a virtual simulation environment where agents have discussions on multiple topics, we show that self-emotion of agents can significantly influence the decision-making process of the agents, leading to approximately a 50% change in decisions.},
keywords = {Agent behavior, Agents, Computational Linguistics, Decision making, Decisions makings, Dialogue generations, Dialogue strategy, Emotional state, Language Model, Model-driven, Natural language processing systems, Simulation framework, Social psychology, Social simulations, Speech processing, Virtual Reality, Virtual simulation environments},
pubstate = {published},
tppubtype = {inproceedings}
}