AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2024
Sehgal, V.; Sekaran, N.
Virtual Recording Generation Using Generative AI and Carla Simulator Proceedings Article
In: SAE Technical Papers, SAE International, 2024, ISSN: 0148-7191.
@inproceedings{sehgal_virtual_2024,
title = {Virtual Recording Generation Using Generative AI and Carla Simulator},
author = {V. Sehgal and N. Sekaran},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85213320680&doi=10.4271%2f2024-28-0261&partnerID=40&md5=37a924cf9beda31f2c23b3a2cdf575d2},
doi = {10.4271/2024-28-0261},
issn = {0148-7191},
year = {2024},
date = {2024-01-01},
booktitle = {SAE Technical Papers},
publisher = {SAE International},
abstract = {To establish and validate new systems incorporated into next-generation vehicles, it is important to understand the actual scenarios that autonomous vehicles are likely to encounter. Consequently, it is important to run Field Operational Tests (FOT). FOT is undertaken with many vehicles over large acquisition areas to ensure the capability and suitability of a continuous function, thus guaranteeing the randomization of test conditions. Capturing FOT and use-case scenario recordings (a use case is a software testing technique designed to ensure that the system under test meets and exceeds the stakeholders' expectations) is very expensive, due to the amount of necessary material (vehicles, measurement equipment, headcount, data storage capacity, trained drivers/professionals); a robust working vehicle setup is not always available; mileage is directly proportional to time; and recording cannot be scaled up due to physical limitations. During the early development phase, ground-truth data is not available, and data reused from other projects may not fully match current project requirements. Not all event scenarios and weather conditions can be ensured during recording capture. In such cases, synthetic/virtual recordings, which can accurately mimic real conditions on a test bench, become very handy and address the aforementioned constraints. Car Learning to Act (CARLA) [1], an open-source autonomous-driving simulator used for the development, training, and validation of autonomous driving systems, is extended to generate synthetic/virtual data/recordings by integrating Generative Artificial Intelligence (Gen AI), particularly Generative Adversarial Networks (GANs) [2] and Retrieval-Augmented Generation (RAG) [3], which are deep learning models.
The process of creating synthetic data using vehicle models becomes more efficient and reliable, as Gen AI can hold and reproduce much more data in scenario development than a developer or tester. A Large Language Model (LLM) [4] takes user input in the form of prompts and generates scenarios that are used to produce a vast number of high-quality, distinct, and realistic driving scenarios that closely resemble real-world driving data. Gen AI [5] empowers the user to generate not only dynamic environment conditions (such as different weather and lighting conditions) but also dynamic elements such as the behavior of other vehicles and pedestrians. Synthetic/virtual recordings [6] generated using Gen AI can be used to train and validate virtual vehicle models and to provide FOT/use-case data that indirectly proves the real-world performance of tasks such as object detection, object recognition, image segmentation, and decision-making algorithms in autonomous vehicles. Augmenting an LLM with CARLA involves training generative models on real-world driving data using RAG, which allows the model to generate new, synthetic instances that resemble real-world conditions/scenarios. © 2024 SAE International. All Rights Reserved.},
keywords = {Access control, Air cushion vehicles, Associative storage, Augmented Reality, Automobile driver simulators, Automobile drivers, Automobile simulators, Automobile testing, Autonomous Vehicles, benchmarking, Computer testing, Condition, Continuous functions, Dynamic random access storage, Formal concept analysis, HDCP, Language Model, Luminescent devices, Network Security, Operational test, Operational use, Problem oriented languages, Randomisation, Real-world drivings, Sailing vessels, Ships, Test condition, UNIX, Vehicle modelling, Virtual addresses},
pubstate = {published},
tppubtype = {inproceedings}
}
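The "randomization of test conditions" that the abstract above emphasizes can be sketched in a few lines. This is purely an illustration: the parameter names and ranges are assumptions, not the authors' pipeline, and the real work drives the CARLA simulator rather than producing plain dictionaries.

```python
import random

def sample_test_condition(rng: random.Random) -> dict:
    """Sample one randomized virtual-recording scenario (illustrative only)."""
    return {
        "weather": rng.choice(["clear", "rain", "fog", "wet_sunset"]),
        "sun_altitude_deg": rng.uniform(-10, 90),   # below 0 simulates night
        "precipitation_pct": rng.uniform(0, 100),
        "n_pedestrians": rng.randint(0, 50),
        "n_vehicles": rng.randint(0, 100),
    }

def sample_batch(n: int, seed: int = 42) -> list:
    """Generate a reproducible batch of scenario configurations."""
    rng = random.Random(seed)  # fixed seed makes the batch reproducible
    return [sample_test_condition(rng) for _ in range(n)]
```

Seeding the generator keeps a synthetic test campaign reproducible while still covering a randomized spread of conditions.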
Na, M.; Lee, J.
Generative AI-Enabled Energy-Efficient Mobile Augmented Reality in Multi-Access Edge Computing Journal Article
In: Applied Sciences (Switzerland), vol. 14, no. 18, 2024, ISSN: 2076-3417.
@article{na_generative_2024,
title = {Generative AI-Enabled Energy-Efficient Mobile Augmented Reality in Multi-Access Edge Computing},
author = {M. Na and J. Lee},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205236316&doi=10.3390%2fapp14188419&partnerID=40&md5=0aa1c42cb7343cfb55a9dc1e66494dc6},
doi = {10.3390/app14188419},
issn = {2076-3417},
year = {2024},
date = {2024-01-01},
journal = {Applied Sciences (Switzerland)},
volume = {14},
number = {18},
abstract = {This paper proposes a novel offloading and super-resolution (SR) control scheme for energy-efficient mobile augmented reality (MAR) in multi-access edge computing (MEC) using SR as a promising generative artificial intelligence (GAI) technology. Specifically, SR can enhance low-resolution images into high-resolution versions using GAI technologies. This capability is particularly advantageous in MAR because it lowers the bitrate required for network transmission. However, the SR process requires considerable computational resources and can introduce latency, potentially overloading the MEC server if there are numerous offload requests for MAR services. In this context, we conduct an empirical study to verify that the computational latency of SR increases with the upscaling level. We then demonstrate a trade-off between computational latency and improved service satisfaction when upscaling images for object detection, as upscaling enhances the detection accuracy. From this perspective, determining whether to apply SR for MAR, while jointly controlling offloading decisions, is challenging. Consequently, to design energy-efficient MAR, we rigorously formulate analytical models for the energy consumption of a MAR device, the overall latency, and the MAR satisfaction of service quality derived from the enforced service accuracy, taking into account the SR process at the MEC server. We then develop a theoretical framework that optimizes the computation offloading and SR control problem for MAR clients by jointly optimizing the offloading and SR decisions, considering their trade-off in MAR with MEC. Finally, the performance evaluation indicates that our proposed framework effectively supports MAR services by efficiently managing offloading and SR decisions, balancing trade-offs between energy consumption, latency, and service satisfaction compared to benchmarks. © 2024 by the authors.},
keywords = {Artificial intelligence technologies, Augmented Reality, benchmarking, Computation offloading, Edge computing, Energy Efficient, Generative adversarial networks, Generative AI, Image enhancement, Mobile augmented reality, Mobile edge computing, Multi-access edge computing, Multiaccess, Quality of Service, Resolution process, super-resolution, Superresolution, Trade off},
pubstate = {published},
tppubtype = {article}
}
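The joint offloading/SR decision that Na and Lee formalize can be caricatured as a weighted cost comparison. A minimal sketch follows; every parameter, weight, and the cost form itself are invented for illustration, and the paper's actual analytical models are far richer.

```python
def choose_mar_action(bits_lowres, bits_highres, uplink_bps,
                      e_tx_per_bit, sr_latency_s, accuracy_gain,
                      w_energy=1.0, w_latency=1.0, w_accuracy=1.0):
    """Pick an (upload resolution, apply SR?) option minimizing a weighted cost.

    Illustrative only: transmission energy is modeled as energy-per-bit,
    latency as transfer time plus server-side SR time, and satisfaction as
    a flat accuracy bonus when a high-resolution image reaches the detector.
    """
    options = {}
    for label, bits, use_sr in [
        ("high_res_no_sr", bits_highres, False),   # upload full image
        ("low_res_with_sr", bits_lowres, True),    # upload small, SR at MEC
        ("low_res_no_sr", bits_lowres, False),     # upload small, no SR
    ]:
        energy = bits * e_tx_per_bit               # device transmit energy
        latency = bits / uplink_bps + (sr_latency_s if use_sr else 0.0)
        # detector sees high resolution either natively or after SR
        accuracy = accuracy_gain if (use_sr or bits == bits_highres) else 0.0
        options[label] = (w_energy * energy + w_latency * latency
                          - w_accuracy * accuracy)
    return min(options, key=options.get)
```

With a fast uplink and a modest SR latency, uploading the low-resolution frame and super-resolving at the edge wins, which mirrors the trade-off the paper studies.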
2023
Vlasov, A. V.
GALA Inspired by Klimt's Art: Text-to-image Processing with Implementation in Interaction and Perception Studies: Library and Case Examples Journal Article
In: Annual Review of CyberTherapy and Telemedicine, vol. 21, pp. 200–205, 2023, ISSN: 1554-8716.
@article{vlasov_gala_2023,
title = {GALA Inspired by Klimt's Art: Text-to-image Processing with Implementation in Interaction and Perception Studies: Library and Case Examples},
author = {A. V. Vlasov},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85182461798&partnerID=40&md5=0c3f5f4214a46db51f46f0092495eb2b},
issn = {1554-8716},
year = {2023},
date = {2023-01-01},
journal = {Annual Review of CyberTherapy and Telemedicine},
volume = {21},
pages = {200–205},
abstract = {Objectives: (a) to develop a library with AI-generated content (AIGC) based on a combinatorial scheme of prompting for interaction and perception research; (b) to show examples of AIGC implementation. The result is a public library for applied research in the cyber-psychological community (CYPSY). The Generative Art Library Abstractions (GALA) include images (Figures 1-2) based on the text-to-image model and inspired by the artwork of Gustav Klimt. They can be used for comparative analysis (benchmarking), end-to-end evaluation, and advanced design. This allows experimentation with complex human-computer interaction (HCI) architectures and visual communication systems, and provides creative design support for experimenting. Examples include: interactive perception of positively colored generative images; HCI dialogues using visual language; generated moods in a VR environment; brain-computer interface for HCI. These visualization resources are a valuable example of AIGC for next-generation R&D. Any suggestions from the CYPSY community are welcome. © 2023, Interactive Media Institute. All rights reserved.},
keywords = {AIGC, applied research, art library, Article, Artificial intelligence, benchmarking, dataset, GALA, human, Human computer interaction, Image processing, Klimt, library, life satisfaction, neuropoem, Text-to-image, Virtual Reality, Wellbeing},
pubstate = {published},
tppubtype = {article}
}
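The "combinatorial scheme of prompting" behind GALA can be illustrated with a tiny sketch. The axis names and the prompt template are assumptions made for the example; the actual library pairs such prompts with a text-to-image model.

```python
from itertools import product

def build_prompt_library(styles, subjects, moods,
                         template="{subject} in the style of {style}, {mood} mood"):
    """Enumerate every style x subject x mood combination as a text prompt."""
    return [template.format(style=s, subject=subj, mood=m)
            for s, subj, m in product(styles, subjects, moods)]
```

Because the prompts are an exhaustive Cartesian product, the resulting image set varies one factor at a time, which is what makes the library usable for controlled perception studies and benchmarking.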