AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2024
Liebers, C.; Pfützenreuter, N.; Auda, J.; Gruenefeld, U.; Schneegass, S.
"computer, Generate!" - Investigating User-Controlled Generation of Immersive Virtual Environments Proceedings Article
In: F., Lorig; J., Tucker; A.D., Lindstrom; F., Dignum; P., Murukannaiah; A., Theodorou; P., Yolum (Ed.): Front. Artif. Intell. Appl., pp. 213–227, IOS Press BV, 2024, ISBN: 09226389 (ISSN); 978-164368522-9 (ISBN).
@inproceedings{liebers_computer_2024,
title = {"Computer, Generate!" - Investigating User-Controlled Generation of Immersive Virtual Environments},
author = {C. Liebers and N. Pfützenreuter and J. Auda and U. Gruenefeld and S. Schneegass},
editor = {Lorig, F. and Tucker, J. and Lindstrom, A.D. and Dignum, F. and Murukannaiah, P. and Theodorou, A. and Yolum, P.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85198740032&doi=10.3233%2fFAIA240196&partnerID=40&md5=215c47e3c831cbb44e5dc10604cda8af},
doi = {10.3233/FAIA240196},
issn = {0922-6389},
isbn = {978-1-64368-522-9},
year = {2024},
date = {2024-01-01},
booktitle = {Frontiers in Artificial Intelligence and Applications},
volume = {386},
pages = {213–227},
publisher = {IOS Press BV},
abstract = {For immersive experiences such as virtual reality, explorable worlds are often fundamental. Generative artificial intelligence looks promising to accelerate the creation of such environments. However, it remains unclear how existing interaction modalities can support user-centered world generation and how users remain in control of the process. Thus, in this paper, we present a virtual reality application to generate virtual environments and compare three common interaction modalities (voice, controller, and hands) in a pre-study (N = 18), revealing a combination of initial voice input and continued controller manipulation as best suitable. We then investigate three levels of process control (all-at-once, creation-before-manipulation, and step-by-step) in a user study (N = 27). Our results show that although all-at-once reduced the number of object manipulations, participants felt more in control when using the step-by-step approach. © 2024 The Authors.},
keywords = {All-at-once, Controllers, Generative AI, Human-Controlled Scene Generation, Immersive, Immersive Virtual Environments, In-control, Process control, Scene Generation, Three-level, User study, User-centred, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
Rosati, R.; Senesi, P.; Lonzi, B.; Mancini, A.; Mandolini, M.
An automated CAD-to-XR framework based on generative AI and Shrinkwrap modelling for a User-Centred design approach Journal Article
In: Advanced Engineering Informatics, vol. 62, 2024, ISSN: 1474-0346.
@article{rosati_automated_2024,
title = {An automated CAD-to-XR framework based on generative AI and Shrinkwrap modelling for a User-Centred design approach},
author = {R. Rosati and P. Senesi and B. Lonzi and A. Mancini and M. Mandolini},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204897460&doi=10.1016%2fj.aei.2024.102848&partnerID=40&md5=3acce73b986bed7a9de42e6336d637ad},
doi = {10.1016/j.aei.2024.102848},
issn = {1474-0346},
year = {2024},
date = {2024-01-01},
journal = {Advanced Engineering Informatics},
volume = {62},
abstract = {CAD-to-XR is the workflow to generate interactive Photorealistic Virtual Prototypes (iPVPs) for Extended Reality (XR) apps from Computer-Aided Design (CAD) models. This process entails modelling, texturing, and XR programming. In the literature, no automatic CAD-to-XR frameworks simultaneously manage CAD simplification and texturing. There are no examples of their adoption for User-Centered Design (UCD). Moreover, such CAD-to-XR workflows do not seize the potentialities of generative algorithms to produce synthetic images (textures). The paper presents a framework for implementing the CAD-to-XR workflow. The solution consists of a module for texture generation based on Generative Adversarial Networks (GANs). The generated texture is then managed by another module (based on Shrinkwrap modelling) to develop the iPVP by simplifying the 3D model and UV mapping the generated texture. The geometric and material data is integrated into a graphic engine, which allows for programming an interactive experience with the iPVP in XR. The CAD-to-XR framework was validated on two components (rifle stock and forend) of a sporting rifle. The solution can automate the texturing process of different product versions in shorter times (compared to a manual procedure). After each product revision, it avoids tedious and manual activities required to generate a new iPVP. The image quality metrics highlight that images are generated in a “realistic” manner (the perceived quality of generated textures is highly comparable to real images). The quality of the iPVPs, generated through the proposed framework and visualised by users through a mixed reality head-mounted display, is equivalent to traditionally designed prototypes. © 2024 The Author(s)},
keywords = {Adversarial networks, Artificial intelligence, CAD-to-XR, Computer aided design models, Computer aided logic design, Computer-aided design, Computer-aided design-to-XR, Design simplification, Digital elevation model, Digital storage, Extended reality, Flow visualization, Generative adversarial networks, Guns (armament), Helmet mounted displays, Intellectual property core, Mixed reality, Photo-realistic, Shrinkfitting, Structural dynamics, User centered design, User-centered design, User-centered design approaches, User-centred, Virtual Prototyping, Work-flows},
pubstate = {published},
tppubtype = {article}
}
Sikström, P.; Valentini, C.; Sivunen, A.; Kärkkäinen, T.
Pedagogical agents communicating and scaffolding students' learning: High school teachers' and students' perspectives Journal Article
In: Computers and Education, vol. 222, 2024, ISSN: 0360-1315.
@article{sikstrom_pedagogical_2024,
title = {Pedagogical agents communicating and scaffolding students' learning: High school teachers' and students' perspectives},
author = {P. Sikström and C. Valentini and A. Sivunen and T. Kärkkäinen},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85202198552&doi=10.1016%2fj.compedu.2024.105140&partnerID=40&md5=dfb4a7b6c1f6352c5cc6faac213e938f},
doi = {10.1016/j.compedu.2024.105140},
issn = {0360-1315},
year = {2024},
date = {2024-01-01},
journal = {Computers and Education},
volume = {222},
abstract = {Pedagogical agents (PAs) communicate verbally and non-verbally with students in digital and virtual reality/augmented reality learning environments. PAs have been shown to be beneficial for learning, and generative artificial intelligence, such as large language models, can improve PAs' communication abilities significantly. K-12 education is underrepresented in learning technology research and teachers' and students' insights have not been considered when developing PA communication. The current study addresses this research gap by conducting and analyzing semi-structured, in-depth interviews with eleven high school teachers and sixteen high school students about their expectations for PAs' communication capabilities. The interviewees identified relational and task-related communication capabilities that a PA should perform to communicate effectively with students and scaffold their learning. PA communication that is simultaneously affirmative and relational can induce immediacy, foster the relationship and engagement with a PA, and support students' learning management. Additionally, the teachers and students described the activities and technological aspects that should be considered when designing conversational PAs. The study showed that teachers and students applied human-to-human communication scripts when outlining their desired PA communication characteristics. The study offers novel insights and recommendations to researchers and developers on the communicational, pedagogical, and technological aspects that must be considered when designing communicative PAs that scaffold students’ learning, and discusses the contributions on human–machine communication in education. © 2024 The Authors},
keywords = {Adversarial machine learning, Agents communication, Augmented Reality, Contrastive Learning, Federated learning, Human communications, Human-Machine Communication, Human-to-human communication script, Human–machine communication, Human–machine communication (HMC), pedagogical agent, Pedagogical agents, Scaffolds, Scaffolds (biology), Secondary education, Student learning, Students, Teachers', Teaching, User-centered design, User-centred, Virtual environments},
pubstate = {published},
tppubtype = {article}
}