AHCI RESEARCH GROUP
Publications
Papers published in international journals, conference and workshop proceedings, and books.
OUR RESEARCH
Scientific Publications
How to
Here you can find the complete list of our publications.
You can use the tag cloud to select only the papers dealing with specific research topics.
You can expand the Abstract, Links, and BibTeX record for each paper.
2024
Patel, P.; Goiri, Í.; Choukse, E.; Warrier, B.; Bianchini, R.; Zhang, C.; Mahalingam, N.
Characterizing Power Management Opportunities for LLMs in the Cloud Proceedings Article
In: Proceedings of the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pp. 207–222, Association for Computing Machinery, 2024, ISBN: 979-8-4007-0386-7.
Abstract | Links | BibTeX | Tags: Cloud, Cloud providers, Computational Linguistics, Computing power, Consumption patterns, Datacenter, datacenters, Electric power utilization, GPUs, Language Model, Large language model, large language models, Model inference, Power, Power management, Power oversubscription, Power usage, Profiling, Program processors, Virtual Reality
@inproceedings{patel_characterizing_2024,
title = {Characterizing Power Management Opportunities for LLMs in the Cloud},
author = {P. Patel and Í. Goiri and E. Choukse and B. Warrier and R. Bianchini and C. Zhang and N. Mahalingam},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85192199791&doi=10.1145%2f3620666.3651329&partnerID=40&md5=6102cbb096a789e297711420d4b8427a},
doi = {10.1145/3620666.3651329},
isbn = {979-8-4007-0386-7},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)},
volume = {3},
pages = {207–222},
publisher = {Association for Computing Machinery},
abstract = {Recent innovations in large language models (LLMs) and their myriad use cases have rapidly driven up the compute demand for datacenter GPUs. Several cloud providers and other enterprises plan to substantially grow their datacenter capacity to support these new workloads. A key bottleneck resource in datacenters is power, which LLMs are quickly saturating due to their rapidly increasing model sizes. We extensively characterize the power consumption patterns of a variety of LLMs and their configurations. We identify the differences between the training and inference power consumption patterns. Based on our analysis, we claim that the average and peak power utilization in LLM inference clusters should not be very high. Our deductions align with data from production LLM clusters, revealing that inference workloads offer substantial headroom for power oversubscription. However, the stringent set of telemetry and controls that GPUs offer in a virtualized environment makes it challenging to build a reliable and robust power management framework. We leverage the insights from our characterization to identify opportunities for better power management. As a detailed use case, we propose a new framework called POLCA, which enables power oversubscription in LLM inference clouds. POLCA is robust, reliable, and readily deployable. Using open-source models to replicate the power patterns observed in production, we simulate POLCA and demonstrate that we can deploy 30% more servers in existing clusters with minimal performance loss.},
keywords = {Cloud, Cloud providers, Computational Linguistics, Computing power, Consumption patterns, Datacenter, datacenters, Electric power utilization, GPUs, Language Model, Large language model, large language models, Model inference, Power, Power management, Power oversubscription, Power usage, Profiling, Program processors, Virtual Reality},
pubstate = {published},
tppubtype = {inproceedings}
}
Recent innovations in large language models (LLMs) and their myriad use cases have rapidly driven up the compute demand for datacenter GPUs. Several cloud providers and other enterprises plan to substantially grow their datacenter capacity to support these new workloads. A key bottleneck resource in datacenters is power, which LLMs are quickly saturating due to their rapidly increasing model sizes. We extensively characterize the power consumption patterns of a variety of LLMs and their configurations. We identify the differences between the training and inference power consumption patterns. Based on our analysis, we claim that the average and peak power utilization in LLM inference clusters should not be very high. Our deductions align with data from production LLM clusters, revealing that inference workloads offer substantial headroom for power oversubscription. However, the stringent set of telemetry and controls that GPUs offer in a virtualized environment makes it challenging to build a reliable and robust power management framework. We leverage the insights from our characterization to identify opportunities for better power management. As a detailed use case, we propose a new framework called POLCA, which enables power oversubscription in LLM inference clouds. POLCA is robust, reliable, and readily deployable. Using open-source models to replicate the power patterns observed in production, we simulate POLCA and demonstrate that we can deploy 30% more servers in existing clusters with minimal performance loss.
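To make the oversubscription argument concrete, the sketch below estimates how many extra servers fit under a fixed cluster power budget when provisioning for a power cap instead of the worst-case peak. This is a minimal illustration of the general idea, not the POLCA implementation; the synthetic power trace, the 1 MW budget, and the 99th-percentile cap level are all assumptions made for the example.

import numpy as np

# Minimal sketch of the headroom behind power oversubscription.
# Trace shape, budget, and cap level are assumed, not taken from the paper.
rng = np.random.default_rng(0)
n = 100_000
trace = 250 + 20 * rng.random(n)      # steady inference power per server (W)
spikes = rng.random(n) < 0.01         # rare bursts on ~1% of timesteps
trace[spikes] += 80                   # burst power

budget_w = 1_000_000                  # fixed cluster power budget: 1 MW (assumed)
peak_w = trace.max()                  # conservative provisioning: worst-case peak
cap_w = np.percentile(trace, 99)      # oversubscribed provisioning: cap rare spikes

n_peak = int(budget_w // peak_w)
n_capped = int(budget_w // cap_w)

print(f"servers provisioned at peak power:   {n_peak}")
print(f"servers provisioned at capped power: {n_capped}")
print(f"extra servers from oversubscription: {100 * (n_capped / n_peak - 1):.1f}%")

Under these assumed numbers, capping the rare bursts admits roughly 30% more servers under the same budget, mirroring the gain the paper reports for its production traces.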