AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
2025
Kai, W. -H.; Xing, K. -X.
Video-driven musical composition using large language model with memory-augmented state space Journal Article
In: Visual Computer, vol. 41, no. 5, pp. 3345–3357, 2025, ISSN: 0178-2789.
@article{kai_video-driven_2025,
title = {Video-driven musical composition using large language model with memory-augmented state space},
author = {W. -H. Kai and K. -X. Xing},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001073242&doi=10.1007%2fs00371-024-03606-w&partnerID=40&md5=7ea24f13614a9a24caf418c37a10bd8c},
doi = {10.1007/s00371-024-03606-w},
issn = {0178-2789},
year = {2025},
date = {2025-01-01},
journal = {Visual Computer},
volume = {41},
number = {5},
pages = {3345–3357},
abstract = {The current landscape of research leveraging large language models (LLMs) is experiencing a surge. Many works harness the powerful reasoning capabilities of these models to comprehend various modalities, such as text, speech, images, videos, etc. However, the research work on LLMs for music inspiration is still in its infancy. To fill the gap in this field and break through the dilemma that LLMs can only understand short videos with limited frames, we propose a large language model with state space for long-term video-to-music generation. To capture long-range dependencies and maintain high performance while further decreasing the computing cost, our overall network includes the Enhanced Video Mamba, which incorporates continuous moving window partitioning and local feature augmentation, and a long-term memory bank that captures and aggregates historical video information to mitigate information loss in long sequences. This framework achieves both subquadratic-time computation and near-linear memory complexity, enabling effective long-term video-to-music generation. We conduct a thorough evaluation of our proposed framework. The experimental results demonstrate that our model achieves or surpasses the performance of the current state-of-the-art models. Our code is released at https://github.com/kai211233/S2L2-V2M. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.},
keywords = {Associative storage, Augmented Reality, Augmented state space, Computer simulation languages, Computer system recovery, Distributed computer systems, HTTP, Language Model, Large language model, Long-term video-to-music generation, Mamba, Memory architecture, Memory-augmented, Modeling languages, Music, Musical composition, Natural language processing systems, Object oriented programming, Performance, Problem oriented languages, State space, State-space},
pubstate = {published},
tppubtype = {article}
}
Angelopoulos, J.; Manettas, C.; Alexopoulos, K.
Industrial Maintenance Optimization Based on the Integration of Large Language Models (LLM) and Augmented Reality (AR) Proceedings Article
In: Alexopoulos, K.; Makris, S.; Stavropoulos, P. (Ed.): Lect. Notes Mech. Eng., pp. 197–205, Springer Science and Business Media Deutschland GmbH, 2025, ISSN: 2195-4356; ISBN: 978-3-031-86488-9.
@inproceedings{angelopoulos_industrial_2025,
title = {Industrial Maintenance Optimization Based on the Integration of Large Language Models (LLM) and Augmented Reality (AR)},
author = {J. Angelopoulos and C. Manettas and K. Alexopoulos},
editor = {Alexopoulos, K. and Makris, S. and Stavropoulos, P.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001421726&doi=10.1007%2f978-3-031-86489-6_20&partnerID=40&md5=63be31b9f4dda4aafd6a641630506c09},
doi = {10.1007/978-3-031-86489-6_20},
issn = {2195-4356},
isbn = {978-3-031-86488-9},
year = {2025},
date = {2025-01-01},
booktitle = {Lect. Notes Mech. Eng.},
pages = {197–205},
publisher = {Springer Science and Business Media Deutschland GmbH},
abstract = {Traditional maintenance procedures often rely on manual data processing and human expertise, leading to inefficiencies and potential errors. In the context of Industry 4.0 several digital technologies, such as Artificial Intelligence (AI), Big Data Analytics (BDA), and eXtended Reality (XR) have been developed and are constantly being integrated in a plethora of manufacturing activities (including industrial maintenance), in an attempt to minimize human error, facilitate shop floor technicians, reduce costs as well as reduce equipment downtimes. The latest developments in the field of AI point towards Large Language Models (LLM) which can communicate with human operators in an intuitive manner. On the other hand, Augmented Reality, as part of XR technologies, offers useful functionalities for improving user perception and interaction with modern, complex industrial equipment. Therefore, the context of this research work lies in the development and training of an LLM in order to provide suggestions and actionable items for the mitigation of unforeseen events (e.g. equipment breakdowns), in order to facilitate shop-floor technicians during their everyday tasks. Paired with AR visualizations over the physical environment, the technicians will get instructions for performing tasks and checks on the industrial equipment in a manner similar to human-to-human communication. The functionality of the proposed framework extends to the integration of modules for exchanging information with the engineering department towards the scheduling of Maintenance and Repair Operations (MRO) as well as the creation of a repository of historical data in order to constantly retrain and optimize the LLM. © The Author(s) 2025.},
keywords = {Augmented Reality, Competition, Cost reduction, Critical path analysis, Crushed stone plants, Generative AI, generative artificial intelligence, Human expertise, Industrial equipment, Industrial maintenance, Language Model, Large language model, Maintenance, Maintenance optimization, Maintenance procedures, Manufacturing data processing, Potential errors, Problem oriented languages, Scheduled maintenance, Shopfloors, Solar power plants},
pubstate = {published},
tppubtype = {inproceedings}
}
Buldu, K. B.; Özdel, S.; Lau, K. H. Carrie; Wang, M.; Saad, D.; Schönborn, S.; Boch, A.; Kasneci, E.; Bozkir, E.
CUIfy the XR: An Open-Source Package to Embed LLM-Powered Conversational Agents in XR Proceedings Article
In: Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR, pp. 192–197, Institute of Electrical and Electronics Engineers Inc., 2025, ISBN: 979-8-3315-2157-8.
@inproceedings{buldu_cuify_2025,
title = {CUIfy the XR: An Open-Source Package to Embed LLM-Powered Conversational Agents in XR},
author = {K. B. Buldu and S. Özdel and K. H. Carrie Lau and M. Wang and D. Saad and S. Schönborn and A. Boch and E. Kasneci and E. Bozkir},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105000229165&doi=10.1109%2fAIxVR63409.2025.00037&partnerID=40&md5=837b0e3425d2e5a9358bbe6c8ecb5754},
doi = {10.1109/AIxVR63409.2025.00037},
isbn = {979-8-3315-2157-8},
year = {2025},
date = {2025-01-01},
booktitle = {Proc. - IEEE Int. Conf. Artif. Intell. Ext. Virtual Real., AIxVR},
pages = {192–197},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Recent developments in computer graphics, machine learning, and sensor technologies enable numerous opportunities for extended reality (XR) setups for everyday life, from skills training to entertainment. With large corporations offering affordable consumer-grade head-mounted displays (HMDs), XR will likely become pervasive, and HMDs will develop as personal devices like smartphones and tablets. However, having intelligent spaces and naturalistic interactions in XR is as important as technological advances so that users grow their engagement in virtual and augmented spaces. To this end, large language model (LLM)-powered non-player characters (NPCs) with speech-to-text (STT) and text-to-speech (TTS) models bring significant advantages over conventional or pre-scripted NPCs for facilitating more natural conversational user interfaces (CUIs) in XR. This paper provides the community with an open-source, customizable, extendable, and privacy-aware Unity package, CUIfy, that facilitates speech-based NPC-user interaction with widely used LLMs, STT, and TTS models. Our package also supports multiple LLM-powered NPCs per environment and minimizes latency between different computational models through streaming to achieve usable interactions between users and NPCs. We publish our source code in the following repository: https://gitlab.lrz.de/hctl/cuify © 2025 IEEE.},
keywords = {Augmented Reality, Computational Linguistics, Conversational user interface, conversational user interfaces, Extended reality, Head-mounted-displays, Helmet mounted displays, Language Model, Large language model, large language models, Non-player character, non-player characters, Open source software, Personnel training, Problem oriented languages, Speech models, Speech-based interaction, Text to speech, Unity, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
Zhou, J.; Weber, R.; Wen, E.; Lottridge, D.
Real-Time Full-body Interaction with AI Dance Models: Responsiveness to Contemporary Dance Proceedings Article
In: Int Conf Intell User Interfaces Proc IUI, pp. 1177–1187, Association for Computing Machinery, 2025, ISBN: 979-8-4007-1306-4.
@inproceedings{zhou_real-time_2025,
title = {Real-Time Full-body Interaction with AI Dance Models: Responsiveness to Contemporary Dance},
author = {J. Zhou and R. Weber and E. Wen and D. Lottridge},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105001922427&doi=10.1145%2f3708359.3712077&partnerID=40&md5=cea9213198220480b80b7a4840d26ccc},
doi = {10.1145/3708359.3712077},
isbn = {979-8-4007-1306-4},
year = {2025},
date = {2025-01-01},
booktitle = {Int Conf Intell User Interfaces Proc IUI},
pages = {1177–1187},
publisher = {Association for Computing Machinery},
abstract = {Interactive AI chatbots put the power of Large-Language Models (LLMs) into people's hands; it is this interactivity that fueled explosive worldwide influence. In the generative dance space, however, there are few deep-learning-based generative dance models built with interactivity in mind. The release of the AIST++ dance dataset in 2021 led to an uptick of capabilities in generative dance models. Whether these models could be adapted to support interactivity and how well this approach will work is not known. In this study, we explore the capabilities of existing generative dance models for motion-to-motion synthesis on real-time, full-body motion-captured contemporary dance data. We identify an existing model that we adapted to support interactivity: the Bailando++ model, which is trained on the AIST++ dataset and was modified to take music and a motion sequence as input parameters in an interactive loop. We worked with two professional contemporary choreographers and dancers to record and curate a diverse set of 203 motion-captured dance sequences as a set of "user inputs" captured through the Optitrack high-precision motion capture 3D tracking system. We extracted 17 quantitative movement features from the motion data using the well-established Laban Movement Analysis theory, which allowed for quantitative comparisons of inter-movement correlations, which we used for clustering input data and comparing input and output sequences. A total of 10 pieces of music were used to generate a variety of outputs using the adapted Bailando++ model. We found that, on average, the generated output motion achieved only moderate correlations to the user input, with some exceptions of movement and music pairs achieving high correlation. The high-correlation generated output sequences were deemed responsive and relevant co-creations in relation to the input sequences.
We discuss implications for interactive generative dance agents, where the use of 3D joint coordinate data should be used over SMPL parameters for ease of real-time generation, and how the use of Laban Movement Analysis could be used to extract useful features and fine-tune deep-learning models. © 2025 Copyright held by the owner/author(s).},
keywords = {3D modeling, Chatbots, Computer interaction, Deep learning, Deep-Learning Dance Model, Design of Human-Computer Interaction, Digital elevation model, Generative AI, Input output programs, Input sequence, Interactivity, Motion capture, Motion tracking, Movement analysis, Output sequences, Problem oriented languages, Real- time, Text mining, Three dimensional computer graphics, User input, Virtual environments, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
2024
Peretti, A.; Mazzola, M.; Capra, L.; Piazzola, M.; Carlevaro, C.
Seamless Human-Robot Interaction Through a Distributed Zero-Trust Architecture and Advanced User Interfaces Proceedings Article
In: Secchi, C.; Marconi, L. (Ed.): Springer. Proc. Adv. Robot., pp. 92–95, Springer Nature, 2024, ISSN: 2511-1256; ISBN: 978-3-031-76427-1.
@inproceedings{peretti_seamless_2024,
title = {Seamless Human-Robot Interaction Through a Distributed Zero-Trust Architecture and Advanced User Interfaces},
author = {A. Peretti and M. Mazzola and L. Capra and M. Piazzola and C. Carlevaro},
editor = {Secchi, C. and Marconi, L.},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85216090556&doi=10.1007%2f978-3-031-76428-8_18&partnerID=40&md5=9f58281f8a8c034fb45fed610ce64bd2},
doi = {10.1007/978-3-031-76428-8_18},
issn = {2511-1256},
isbn = {978-3-031-76427-1},
year = {2024},
date = {2024-01-01},
booktitle = {Springer. Proc. Adv. Robot.},
volume = {33 SPAR},
pages = {92–95},
publisher = {Springer Nature},
abstract = {The proposed work presents a novel interaction platform designed to address the shortage of skilled workers in the labor market, facilitating the seamless integration of robotics and advanced user interfaces such as eXtended Reality (XR) to optimize Human-Robot Collaboration (HRC) as well as Robot-Robot Collaboration (RRC) in an Industry 4.0 scenario. One of the most challenging situations is to optimize and simplify the collaboration of humans and robots to decrease or avoid system slowdowns, blocks, or dangerous situations for both users and robots. The advent of LLMs (Large Language Models) has been a breakthrough across the whole IT environment because they perform well in different scenarios, from human text generation to autonomous systems management. Due to their malleability, LLMs have a primary role in Human-Robot collaboration processes. For this reason, the platform comprises three key technical components: a distributed zero-trust architecture, a virtual avatar, and digital twins of robots powered by the Robot Operating System 2 (ROS2) platform. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.},
keywords = {Advanced user interfaces, Digital Twins, HRC, Human Robot Interaction, Human-Robot Collaboration, Humans-robot interactions, Industrial robots, Industry 4.0, Intelligent robots, Interaction platform, Language Model, Large language model, LLM, Problem oriented languages, Robot Operating System, Robot operating system 2, Robot-robot collaboration, ROS2, RRC, Wages, XR, ZTA},
pubstate = {published},
tppubtype = {inproceedings}
}
Bandara, E.; Foytik, P.; Shetty, S.; Hassanzadeh, A.
Generative-AI(with Custom-Trained Meta's Llama2 LLM), Blockchain, NFT, Federated Learning and PBOM Enabled Data Security Architecture for Metaverse on 5G/6G Environment Proceedings Article
In: Proc. - IEEE Int. Conf. Mob. Ad-Hoc Smart Syst., MASS, pp. 118–124, Institute of Electrical and Electronics Engineers Inc., 2024, ISBN: 979-8-3503-6399-9.
@inproceedings{bandara_generative-aicustom-trained_2024,
title = {Generative-AI(with Custom-Trained Meta's Llama2 LLM), Blockchain, NFT, Federated Learning and PBOM Enabled Data Security Architecture for Metaverse on 5G/6G Environment},
author = {E. Bandara and P. Foytik and S. Shetty and A. Hassanzadeh},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85210243120&doi=10.1109%2fMASS62177.2024.00026&partnerID=40&md5=70d21ac1e9c7b886da14825376919cac},
doi = {10.1109/MASS62177.2024.00026},
isbn = {979-8-3503-6399-9},
year = {2024},
date = {2024-01-01},
booktitle = {Proc. - IEEE Int. Conf. Mob. Ad-Hoc Smart Syst., MASS},
pages = {118–124},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {The Metaverse is an integrated network of 3D virtual worlds accessible through a virtual reality headset. Its impact on data privacy and security is increasingly recognized as a major concern. There is a growing interest in developing a reference architecture that describes the four core aspects of its data: acquisition, storage, sharing, and interoperability. Establishing a secure data architecture is imperative to manage users' personal data and facilitate trusted AR/VR and AI/ML solutions within the Metaverse. This paper details a reference architecture empowered by Generative-AI, Blockchain, Federated Learning, and Non-Fungible Tokens (NFTs). Within this architecture, various resource providers collaborate via the blockchain network. Handling personal user data and resource provider identities is executed through a Self-Sovereign Identity-enabled privacy-preserving framework. AR/VR devices in the Metaverse are represented as NFT tokens available for user purchase. Software updates and supply-chain verification for these devices are managed using a Software Bill of Materials (SBOM) and a Pipeline Bill of Materials (PBOM) verification system. Moreover, a custom-trained Llama2 LLM from Meta has been integrated to generate PBOMs for AR/VR devices' software updates, thereby preventing malware intrusions and data breaches. This Llama2-13B LLM has been quantized and fine-tuned using QLoRA to ensure optimal performance on consumer-grade hardware. The provenance of AI/ML models used in the Metaverse is encapsulated as Model Card objects, allowing external parties to audit and verify them, thus mitigating adversarial learning attacks within these models. To the best of our knowledge, this is the very first research effort aimed at standardizing PBOM schemas and integrating Language Model algorithms for the generation of PBOMs.
Additionally, a proposed mechanism facilitates different AI/ML providers in training their machine learning models using a privacy-preserving federated learning approach. Authorization of communications among AR/VR devices in the Metaverse is conducted through a Zero-Trust security-enabled rule engine. A system testbed has been implemented within a 5G environment, utilizing Ericsson new Radio with Open5GS 5G core. © 2024 IEEE.},
keywords = {5G, 6G, Adversarial machine learning, Bill of materials, Block-chain, Blockchain, Curricula, Data privacy, Distance education, Federated learning, Generative adversarial networks, Generative-AI, Hardware security, Llama2, LLM, Medium access control, Metaverse, Metaverses, Network Security, Nft, Non-fungible token, Personnel training, Problem oriented languages, Reference architecture, Steganography},
pubstate = {published},
tppubtype = {inproceedings}
}
Sehgal, V.; Sekaran, N.
Virtual Recording Generation Using Generative AI and Carla Simulator Proceedings Article
In: SAE Techni. Paper., SAE International, 2024, ISSN: 0148-7191.
@inproceedings{sehgal_virtual_2024,
title = {Virtual Recording Generation Using Generative AI and Carla Simulator},
author = {V. Sehgal and N. Sekaran},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85213320680&doi=10.4271%2f2024-28-0261&partnerID=40&md5=37a924cf9beda31f2c23b3a2cdf575d2},
doi = {10.4271/2024-28-0261},
issn = {0148-7191},
year = {2024},
date = {2024-01-01},
booktitle = {SAE Techni. Paper.},
publisher = {SAE International},
abstract = {To establish and validate new systems incorporated into next generation vehicles, it is important to understand actual scenarios which the autonomous vehicles will likely encounter. Consequently, to do this, it is important to run Field Operational Tests (FOT). FOT is undertaken with many vehicles and large acquisition areas, ensuring the capability and suitability of a continuous function and thus guaranteeing the randomization of test conditions. FOT and Use case (a software testing technique designed to ensure that the system under test meets and exceeds the stakeholders' expectations) scenario recording capture is very expensive, due to the amount of necessary material (vehicles, measurement equipment/objectives, headcount, data storage capacity/complexity, trained drivers/professionals); a robust working vehicle setup is not always available; mileage is directly proportional to time; and it cannot be scaled up due to physical limitations. During the early development phase, ground truth data is not available, and data that can be reused from other projects may not match 100% with current project requirements. All event scenarios/weather conditions cannot be ensured during recording capture; in such cases synthetic/virtual recording comes in very handy, as it can accurately mimic real conditions on a test bench and can address the aforementioned constraints. Car Learning to Act (CARLA) [1], an open-source autonomous driving simulator used for the development, training, and validation of autonomous driving systems, is extended for the generation of synthetic/virtual data/recordings by integrating Generative Artificial Intelligence (Gen AI), particularly Generative Adversarial Networks (GANs) [2] and Retrieval Augmented Generation (RAG) [3], which are deep learning models.
The process of creating synthetic data using vehicle models becomes more efficient and reliable as Gen AI can hold and reproduce much more data in scenario development than a developer or tester. A Large Language Model (LLM) [4] takes user input in the form of user prompts and generate scenarios that are used to produce a vast amount of high-quality, distinct, and realistic driving scenarios that closely resemble real-world driving data. Gen AI [5] empowers the user to generate not only dynamic environment conditions (such as different weather conditions and lighting conditions) but also dynamic elements like the behavior of other vehicles and pedestrians. Synthetic/Virtual recording [6] generated using Gen AI can be used to train and validate virtual vehicle models, FOT/Use case data which is used to indirectly prove real-world performance of functionality of tasks such as object detection, object recognition, image segmentation, and decision-making algorithms in autonomous vehicles. Augmenting LLM with CARLA involves training generative models on real-world driving data using RAG which allows the model to generate new, synthetic instances that resemble real-world conditions/scenarios. © 2024 SAE International. All Rights Reserved.},
keywords = {Access control, Air cushion vehicles, Associative storage, Augmented Reality, Automobile driver simulators, Automobile drivers, Automobile simulators, Automobile testing, Autonomous Vehicles, benchmarking, Computer testing, Condition, Continuous functions, Dynamic random access storage, Formal concept analysis, HDCP, Language Model, Luminescent devices, Network Security, Operational test, Operational use, Problem oriented languages, Randomisation, Real-world drivings, Sailing vessels, Ships, Test condition, UNIX, Vehicle modelling, Virtual addresses},
pubstate = {published},
tppubtype = {inproceedings}
}