AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
OUR RESEARCH
Scientific Publications
2024
Otoum, Y.; Gottimukkala, N.; Kumar, N.; Nayak, A.
Machine Learning in Metaverse Security: Current Solutions and Future Challenges Journal Article
In: ACM Computing Surveys, vol. 56, no. 8, 2024, ISSN: 0360-0300.
Tags: Block-chain, Blockchain, digital twin, E-Learning, Extended reality, Future challenges, Generative AI, machine learning, Machine-learning, Metaverse Security, Metaverses, Security and privacy, Spatio-temporal dynamics, Sustainable development
@article{otoum_machine_2024,
title = {Machine Learning in Metaverse Security: Current Solutions and Future Challenges},
author = {Y. Otoum and N. Gottimukkala and N. Kumar and A. Nayak},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85193466017&doi=10.1145%2f3654663&partnerID=40&md5=b35485c5f2e943ec105ea11a80712cbe},
doi = {10.1145/3654663},
issn = {0360-0300},
year = {2024},
date = {2024-01-01},
journal = {ACM Computing Surveys},
volume = {56},
number = {8},
abstract = {The Metaverse, positioned as the next frontier of the Internet, has the ambition to forge a virtual shared realm characterized by immersion, hyper-spatiotemporal dynamics, and self-sustainability. Recent technological strides in AI, Extended Reality, 6G, and blockchain propel the Metaverse closer to realization, gradually transforming it from science fiction into an imminent reality. Nevertheless, the extensive deployment of the Metaverse faces substantial obstacles, primarily stemming from its potential to infringe on privacy and be susceptible to security breaches, whether inherent in its underlying technologies or arising from the evolving digital landscape. Metaverse security provisioning is poised to confront various foundational challenges owing to its distinctive attributes, encompassing immersive realism, hyper-spatiotemporality, sustainability, and heterogeneity. This article undertakes a comprehensive study of the security and privacy challenges facing the Metaverse, leveraging machine learning models for this purpose. In particular, our focus centers on an innovative distributed Metaverse architecture characterized by interactions across 3D worlds. Subsequently, we conduct a thorough review of the existing cutting-edge measures designed for Metaverse systems while also delving into the discourse surrounding security and privacy threats. As we contemplate the future of Metaverse systems, we outline directions for open research pursuits in this evolving landscape. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.},
keywords = {Block-chain, Blockchain, digital twin, E-Learning, Extended reality, Future challenges, Generative AI, machine learning, Machine-learning, Metaverse Security, Metaverses, Security and privacy, Spatio-temporal dynamics, Sustainable development},
pubstate = {published},
tppubtype = {article}
}
Kang, Z.; Liu, Y.; Zheng, J.; Sun, Z.
Revealing the Difficulty in Jailbreak Defense on Language Models for Metaverse Proceedings Article
In: Gong, Q.; He, X. (Eds.): SocialMeta - Proc. Int. Workshop Soc. Metaverse Comput., Sens. Netw., Part: ACM SenSys, pp. 31–37, Association for Computing Machinery, Inc, 2024, ISBN: 979-8-4007-1299-9.
Tags: Attack strategies, Computer simulation languages, Defense, Digital elevation model, Guard rails, Jailbreak, Language Model, Large language model, Metaverse Security, Metaverses, Natural languages, Performance, Virtual Reality
@inproceedings{kang_revealing_2024,
title = {Revealing the Difficulty in Jailbreak Defense on Language Models for Metaverse},
author = {Z. Kang and Y. Liu and J. Zheng and Z. Sun},
editor = {Q. Gong and X. He},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85212189363&doi=10.1145%2f3698387.3699998&partnerID=40&md5=673326728c3db35ffbbaf807eb7f003c},
doi = {10.1145/3698387.3699998},
isbn = {979-8-4007-1299-9},
year = {2024},
date = {2024-01-01},
booktitle = {SocialMeta - Proc. Int. Workshop Soc. Metaverse Comput., Sens. Netw., Part: ACM SenSys},
pages = {31–37},
publisher = {Association for Computing Machinery, Inc},
abstract = {Large language models (LLMs) have demonstrated exceptional capabilities in natural language processing tasks, fueling innovations in emerging areas such as the metaverse. These models enable dynamic virtual communities, enhancing user interactions and revolutionizing industries. However, their increasing deployment exposes vulnerabilities to jailbreak attacks, where adversaries can manipulate LLM-driven systems to generate harmful content. While various defense mechanisms have been proposed, their efficacy against diverse jailbreak techniques remains unclear. This paper addresses this gap by evaluating the performance of three popular defense methods (Backtranslation, Self-reminder, and Paraphrase) against different jailbreak attack strategies (GCG, BEAST, and Deepinception), while also utilizing three distinct models. Our findings reveal that while defenses are highly effective against optimization-based jailbreak attacks and reduce the attack success rate by 79% on average, they struggle in defending against attacks that alter attack motivations. Additionally, methods relying on self-reminding perform better when integrated with models featuring robust safety guardrails. For instance, Llama2-7b shows a 100% reduction in Attack Success Rate, while Vicuna-7b and Mistral-7b, lacking safety alignment, exhibit a lower average reduction of 65.8%. This study highlights the challenges in developing universal defense solutions for securing LLMs in dynamic environments like the metaverse. Furthermore, our study highlights that the three distinct models utilized demonstrate varying initial defense performance against different jailbreak attack strategies, underscoring the complexity of effectively securing LLMs. © 2024 Copyright held by the owner/author(s).},
keywords = {Attack strategies, Computer simulation languages, Defense, Digital elevation model, Guard rails, Jailbreak, Language Model, Large language model, Metaverse Security, Metaverses, Natural languages, Performance, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
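As a brief illustration of the defenses and metric discussed in this paper's abstract, the Python sketch below shows the general idea of a self-reminder defense (wrapping a possibly adversarial prompt in a safety reminder) and of measuring the attack success rate with and without a defense. This is an illustrative sketch only, not the authors' implementation: generate and is_harmful are hypothetical stand-ins for an LLM call and a harmfulness judge, and the reminder wording is assumed.

# Illustrative sketch (not the paper's code): a self-reminder style defense
# and an attack-success-rate (ASR) measurement, using hypothetical helpers.

def self_reminder(user_prompt: str) -> str:
    """Wrap a possibly adversarial prompt in a safety reminder."""
    return (
        "You are a responsible assistant and must not produce harmful content.\n\n"
        f"{user_prompt}\n\n"
        "Remember: respond responsibly and refuse harmful requests."
    )

def attack_success_rate(jailbreak_prompts, generate, is_harmful, defense=None):
    """Fraction of jailbreak prompts that still elicit harmful output.

    generate(prompt) -> str and is_harmful(text) -> bool are assumed callables
    standing in for the target LLM and a harmfulness judge, respectively.
    """
    hits = 0
    for prompt in jailbreak_prompts:
        wrapped = defense(prompt) if defense else prompt
        if is_harmful(generate(wrapped)):
            hits += 1
    return hits / len(jailbreak_prompts)

# The reduction reported in the abstract is relative:
# reduction = (asr_baseline - asr_defended) / asr_baseline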