AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2025
Zhou, J.; Weber, R.; Wen, E.; Lottridge, D.
Real-Time Full-body Interaction with AI Dance Models: Responsiveness to Contemporary Dance Proceedings Article
In: Int Conf Intell User Interfaces Proc IUI, pp. 1177–1187, Association for Computing Machinery, 2025, ISBN: 979-840071306-4.
@inproceedings{zhou_real-time_2025,
title = {Real-Time Full-body Interaction with AI Dance Models: Responsiveness to Contemporary Dance},
author = {J. Zhou and R. Weber and E. Wen and D. Lottridge},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001922427&doi=10.1145%2f3708359.3712077&partnerID=40&md5=cea9213198220480b80b7a4840d26ccc},
doi = {10.1145/3708359.3712077},
isbn = {979-840071306-4},
year = {2025},
date = {2025-01-01},
booktitle = {Int Conf Intell User Interfaces Proc IUI},
pages = {1177--1187},
publisher = {Association for Computing Machinery},
abstract = {Interactive AI chatbots put the power of Large-Language Models (LLMs) into people's hands; it is this interactivity that fueled explosive worldwide influence. In the generative dance space, however, there are few deep-learning-based generative dance models built with interactivity in mind. The release of the AIST++ dance dataset in 2021 led to an uptick of capabilities in generative dance models. Whether these models could be adapted to support interactivity, and how well this approach would work, was not known. In this study, we explore the capabilities of existing generative dance models for motion-to-motion synthesis on real-time, full-body motion-captured contemporary dance data. We identify an existing model that we adapted to support interactivity: the Bailando++ model, which is trained on the AIST++ dataset and was modified to take music and a motion sequence as input parameters in an interactive loop. We worked with two professional contemporary choreographers and dancers to record and curate a diverse set of 203 motion-captured dance sequences as a set of "user inputs" captured through the OptiTrack high-precision motion capture 3D tracking system. We extracted 17 quantitative movement features from the motion data using the well-established Laban Movement Analysis theory, which allowed for quantitative comparisons of inter-movement correlations, which we used for clustering input data and comparing input and output sequences. A total of 10 pieces of music were used to generate a variety of outputs using the adapted Bailando++ model. We found that, on average, the generated output motion achieved only moderate correlations to the user input, with some exceptions of movement and music pairs achieving high correlation. The high-correlation generated output sequences were deemed responsive and relevant co-creations in relation to the input sequences. We discuss implications for interactive generative dance agents, where 3D joint coordinate data should be used over SMPL parameters for ease of real-time generation, and how Laban Movement Analysis could be used to extract useful features and fine-tune deep-learning models. © 2025 Copyright held by the owner/author(s).},
keywords = {3D modeling, Chatbots, Computer interaction, Deep learning, Deep-Learning Dance Model, Design of Human-Computer Interaction, Digital elevation model, Generative AI, Input output programs, Input sequence, Interactivity, Motion capture, Motion tracking, Movement analysis, Output sequences, Problem oriented languages, Real- time, Text mining, Three dimensional computer graphics, User input, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
2024
Salloum, A.; Alfaisal, R.; Salloum, S. A.
Revolutionizing Medical Education: Empowering Learning with ChatGPT Book Section
In: Studies in Big Data, vol. 144, pp. 79–90, Springer Science and Business Media Deutschland GmbH, 2024, ISSN: 2197-6503.
@incollection{salloum_revolutionizing_2024,
title = {Revolutionizing Medical Education: Empowering Learning with ChatGPT},
author = {A. Salloum and R. Alfaisal and S. A. Salloum},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85191302844&doi=10.1007%2f978-3-031-52280-2_6&partnerID=40&md5=a5325b8e43460906174a3c7a2c383e1a},
doi = {10.1007/978-3-031-52280-2_6},
issn = {2197-6503},
year = {2024},
date = {2024-01-01},
booktitle = {Studies in Big Data},
volume = {144},
pages = {79--90},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {The landscape of medical education is undergoing a paradigm shift driven by technological advancements. This abstract explores the potential of ChatGPT, an advanced AI language model developed by OpenAI, in revolutionizing medical education. ChatGPT’s capacity to understand and generate human-like text opens doors to interactive, personalized, and adaptive learning experiences that address the evolving demands of medical training. Medical education traditionally relies on didactic approaches that often lack interactivity and personalization. ChatGPT addresses this limitation by introducing a conversational AI-driven dimension to medical learning. Learners can engage with ChatGPT in natural language, seeking explanations, asking questions, and clarifying doubts. This adaptive interactivity mirrors the dynamic nature of medical practice and fosters critical thinking skills essential for medical professionals. Furthermore, ChatGPT augments educators’ roles by assisting in content creation, formative assessments, and immediate feedback delivery. This empowers educators to focus on higher-order facilitation and mentorship, enriching the learning journey. However, responsible integration of ChatGPT into medical education demands careful curation of accurate medical content and validation against trusted sources. Ethical considerations related to AI-generated content and potential biases also warrant attention. This abstract underscores the transformative potential of ChatGPT in reshaping medical education. By creating an environment of engagement, adaptability, and personalization, ChatGPT paves the way for a dynamic and empowered medical learning ecosystem that aligns with the demands of modern healthcare. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.},
keywords = {Abstracting, AI integration, ChatGPT, Education, Human like, Interactivity, Language Model, Learning platform, Learning platforms, Medical education, Metaverse, Metaverses, Paradigm shifts, Personalizations, Technological advancement},
pubstate = {published},
tppubtype = {incollection}
}
Liu, P.; Kitson, A.; Picard-Deland, C.; Carr, M.; Liu, S.; Lc, R.; Zhu-Tian, C.
Virtual Dream Reliving: Exploring Generative AI in Immersive Environment for Dream Re-experiencing Proceedings Article
In: Conf Hum Fact Comput Syst Proc, Association for Computing Machinery, 2024, ISBN: 979-840070331-7.
@inproceedings{liu_virtual_2024,
title = {Virtual Dream Reliving: Exploring Generative AI in Immersive Environment for Dream Re-experiencing},
author = {P. Liu and A. Kitson and C. Picard-Deland and M. Carr and S. Liu and R. Lc and C. Zhu-Tian},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85194182384&doi=10.1145%2f3613905.3644054&partnerID=40&md5=c6de3def82e91544f194a0fd465a636e},
doi = {10.1145/3613905.3644054},
isbn = {979-840070331-7},
year = {2024},
date = {2024-01-01},
booktitle = {Conf Hum Fact Comput Syst Proc},
publisher = {Association for Computing Machinery},
abstract = {Dreaming is a fundamental component of the human experience. Modern-day psychologists and neuroscientists use "dreamwork" to describe a variety of strategies that deepen and engage with dreams. Re-experiencing the dream as if reliving the memory, feelings, and bodily sensations from the dream is a key element shared by many dreamwork practices. In this paper, we propose the concept of "dreamwork engineering" by creating a system enabling dream re-experiencing in a virtual reality environment through generative AI. Through an autoethnographic study, the first author documented his own dreams and relived his dream experiences for two weeks. Based on our results, we propose a technology-aided dreamwork framework, where technology could potentially augment traditional dreamwork methods through spatiality and movement, interactivity, and abstract anchors. We further highlight the collaborative role of technology in dreamwork and advocate that the scientific community could also benefit from dreaming and dreamwork for scientific creativity. © 2024 Association for Computing Machinery. All rights reserved.},
keywords = {Dream Re-experiencing, Dreamwork Engineering, Fundamental component, Generative AI, Immersive environment, Interactivity, Key elements, Personal Insight, Scientific Creativity, Virtual Reality, Virtual-reality environment},
pubstate = {published},
tppubtype = {inproceedings}
}
Kumaar, D. Prasanna; Abisha, D.; Christy, J. Ida; Evanjalin, R. Navedha; Rajesh, P.; Haris, K. Mohamed
WRISTVIEW: Augmented Reality and Generative AI Integration for Enhanced Online Shopping Experiences Proceedings Article
In: Int. Conf. I-SMAC (IoT Soc., Mob., Anal. Cloud), I-SMAC - Proc., pp. 1115–1120, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-835037642-5.
@inproceedings{prasanna_kumaar_wristview_2024,
title = {WRISTVIEW: Augmented Reality and Generative AI Integration for Enhanced Online Shopping Experiences},
author = {D. Prasanna Kumaar and D. Abisha and J. Ida Christy and R. Navedha Evanjalin and P. Rajesh and K. Mohamed Haris},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85208587489&doi=10.1109%2fI-SMAC61858.2024.10714789&partnerID=40&md5=e7742c8808cb551b17efe9ac3efeb961},
doi = {10.1109/I-SMAC61858.2024.10714789},
isbn = {979-835037642-5},
year = {2024},
date = {2024-01-01},
booktitle = {Int. Conf. I-SMAC (IoT Soc., Mob., Anal. Cloud), I-SMAC - Proc.},
pages = {1115--1120},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The traditional retail experience for purchasing watches lacks the interactivity and personalization that modern consumers seek. With the increasing shift towards online shopping platforms, there is a growing demand for an engaging and immersive virtual experience that allows customers to explore and try on watches. The absence of expert guidance during the online shopping process often results in less informed decision-making. To address these challenges, WRISTVIEW presents an innovative solution that integrates augmented reality (AR) and a conversational Generative AI (GENAI) chatbot. The AR component enables users to virtually try on watches, providing a realistic and interactive experience. The GENAI chatbot enhances this experience by offering expert advice, answering queries, and guiding users through the watch shopping journey, thereby creating a more personalized and informative shopping process. The objective of this research is to bridge the gap between the traditional in-store watch shopping experience and the online environment, ensuring that customers can make well-informed and satisfying purchase decisions in a virtual setting. The development, implementation, and potential impact of combining AR and GENAI technologies to transform the online watch shopping experience are discussed. © 2024 IEEE.},
keywords = {Augmented Reality, Chatbots, Conversational AI, Customer satisfaction, Decision making, E-commerce Innovation, Growing demand, Immersive, Interactivity, Online shopping, Personalizations, Personalized Shopping Experience, Purchasing, Sales, Virtual environments, Virtual Try-On},
pubstate = {published},
tppubtype = {inproceedings}
}