AHCI RESEARCH GROUP
Publications
Papers published in international journals,
proceedings of conferences, workshops and books.
2025
Zeng, S. -Y.; Liang, T. -Y.
PartConverter: A Part-Oriented Transformation Framework for Point Clouds Journal Article
In: IET Image Processing, vol. 19, no. 1, 2025, ISSN: 1751-9659, 1751-9667, (Publisher: John Wiley and Sons Inc).
Tags: 3D modeling, 3D models, 3d-modeling, Adversarial networks, attention mechanism, Attention mechanisms, Auto encoders, Cloud transformations, Generative Adversarial Network, Part assembler, Part-oriented, Point cloud transformation, Point-clouds
@article{zeng_partconverter_2025,
title = {PartConverter: A Part-Oriented Transformation Framework for Point Clouds},
author = {S. -Y. Zeng and T. -Y. Liang},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105005775417&doi=10.1049%2Fipr2.70104&partnerID=40&md5=d1eccf7d6b58a93978c55e8f404be38b},
doi = {10.1049/ipr2.70104},
issn = {1751-9659, 1751-9667},
year = {2025},
date = {2025-01-01},
journal = {IET Image Processing},
volume = {19},
number = {1},
abstract = {With generative AI technologies advancing rapidly, the capabilities for 3D model generation and transformation are expanding across industries like manufacturing, healthcare, and virtual reality. However, existing methods based on generative adversarial networks (GANs), autoencoders, or transformers still have notable limitations. They primarily generate entire objects without providing flexibility for independent part transformation or precise control over model components. These constraints pose challenges for applications requiring complex object manipulation and fine-grained adjustments. To overcome these limitations, we propose PartConverter, a novel part-oriented point cloud transformation framework emphasizing flexibility and precision in 3D model transformations. PartConverter leverages attention mechanisms and autoencoders to capture crucial details within each part while modeling the relationships between components, thereby enabling highly customizable, part-wise transformations that maintain overall consistency. Additionally, our part assembler ensures that transformed parts align coherently, resulting in a consistent and realistic final 3D shape. This framework significantly enhances control over detailed part modeling, increasing the flexibility and efficiency of 3D model transformation workflows.},
note = {Publisher: John Wiley and Sons Inc},
keywords = {3D modeling, 3D models, 3d-modeling, Adversarial networks, attention mechanism, Attention mechanisms, Auto encoders, Cloud transformations, Generative Adversarial Network, Part assembler, Part-oriented, Point cloud transformation, Point-clouds},
pubstate = {published},
tppubtype = {article}
}
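The abstract's core idea (encode each part of a point cloud separately, edit one part independently, then let a "part assembler" recombine the parts coherently) can be sketched in miniature. This is an illustrative toy, not the paper's method: the function names (`encode_part`, `assemble`) and the centroid-based "latent" are assumptions standing in for the learned autoencoder and attention modules.

```python
import numpy as np

def encode_part(points):
    """Toy stand-in for a part encoder: latent = (centroid, centered points)."""
    centroid = points.mean(axis=0)
    return centroid, points - centroid

def assemble(parts):
    """Toy stand-in for the part assembler: place each part back by its centroid."""
    return np.vstack([centroid + local for centroid, local in parts])

# Two parts of a toy chair: a seat and a backrest.
seat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
back = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])

parts = [encode_part(seat), encode_part(back)]

# Part-wise edit: scale only the backrest's local geometry; the seat and the
# backrest's position in the assembly are untouched.
centroid, local = parts[1]
parts[1] = (centroid, local * 2.0)

cloud = assemble(parts)  # seat points are unchanged; backrest is stretched in place
```

The point of the sketch is the separation of concerns the paper argues for: per-part latents make a single part editable without regenerating the whole object, while the assembler is what keeps the edited parts globally consistent.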
Oh, S.; Jung, M.; Kim, T.
EnvMat: A Network for Simultaneous Generation of PBR Maps and Environment Maps from a Single Image Journal Article
In: Electronics (Switzerland), vol. 14, no. 13, 2025, ISSN: 2079-9292, (Publisher: Multidisciplinary Digital Publishing Institute (MDPI)).
Tags: 3D graphics, Auto encoders, Cameras, Diffusion, Diffusion Model, Environment maps, generative artificial intelligence, Image understanding, Latent diffusion model, latent diffusion models, Metaverse, Metaverses, Neural Networks, Physically based rendering, physically based rendering (PBR), Rendering (computer graphics), Tellurium compounds, Three dimensional computer graphics, Variational Autoencoder, Variational Autoencoders (VAEs), Variational techniques, Virtual Reality, Visualization
@article{oh_envmat_2025,
title = {EnvMat: A Network for Simultaneous Generation of PBR Maps and Environment Maps from a Single Image},
author = {S. Oh and M. Jung and T. Kim},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105010306182&doi=10.3390%2Felectronics14132554&partnerID=40&md5=a6e24d71cb6f1e632ee2415b99f68c0e},
doi = {10.3390/electronics14132554},
issn = {2079-9292},
year = {2025},
date = {2025-01-01},
journal = {Electronics (Switzerland)},
volume = {14},
number = {13},
abstract = {Generative neural networks have expanded from text and image generation to creating realistic 3D graphics, which are critical for immersive virtual environments. Physically Based Rendering (PBR)—crucial for realistic 3D graphics—depends on PBR maps, environment (env) maps for lighting, and camera viewpoints. Current research mainly generates PBR maps separately, often using fixed env maps and camera poses. This limitation reduces visual consistency and immersion in 3D spaces. Addressing this, we propose EnvMat, a diffusion-based model that simultaneously generates PBR and env maps. EnvMat uses two Variational Autoencoders (VAEs) for map reconstruction and a Latent Diffusion UNet. Experimental results show that EnvMat surpasses existing methods in preserving visual accuracy, as validated through metrics like LPIPS, MS-SSIM, and CIEDE2000.},
note = {Publisher: Multidisciplinary Digital Publishing Institute (MDPI)},
keywords = {3D graphics, Auto encoders, Cameras, Diffusion, Diffusion Model, Environment maps, generative artificial intelligence, Image understanding, Latent diffusion model, latent diffusion models, Metaverse, Metaverses, Neural Networks, Physically based rendering, physically based rendering (PBR), Rendering (computer graphics), Tellurium compounds, Three dimensional computer graphics, Variational Autoencoder, Variational Autoencoders (VAEs), Variational techniques, Virtual Reality, Visualization},
pubstate = {published},
tppubtype = {article}
}
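The EnvMat abstract describes two VAEs feeding a latent diffusion model, so that the PBR maps and the environment map share one diffusion state. A minimal numerical sketch of that joint-latent idea follows; everything here is an assumption for illustration (random linear maps stand in for the learned VAE encoders, and the reverse step uses an oracle noise estimate, where a real UNet would predict the noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy 'VAE encoder': flatten a map and project it to a small latent."""
    return x.reshape(-1) @ w

pbr_map = rng.random((4, 4, 3))   # stand-in for albedo/roughness/metallic maps
env_map = rng.random((4, 8, 3))   # stand-in for an equirectangular env map

w_pbr = rng.standard_normal((48, 8))  # 4*4*3 -> 8-dim latent
w_env = rng.standard_normal((96, 8))  # 4*8*3 -> 8-dim latent

# Joint latent: concatenating both latents into one diffusion state is what
# lets a single denoiser keep material and lighting mutually consistent.
z0 = np.concatenate([encode(pbr_map, w_pbr), encode(env_map, w_env)])

# Forward diffusion q(z_t | z_0) at a fixed noise level alpha_bar.
alpha_bar = 0.5
eps = rng.standard_normal(z0.shape)
z_t = np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * eps

# Idealized reverse step: with a perfect noise estimate, z0 is recovered
# exactly. In the real model, eps_hat comes from the Latent Diffusion UNet.
eps_hat = (z_t - np.sqrt(alpha_bar) * z0) / np.sqrt(1.0 - alpha_bar)
z0_hat = (z_t - np.sqrt(1.0 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)
```

The sketch makes the design choice concrete: because both maps live in one latent `z0`, every denoising step updates material and lighting together, rather than generating PBR maps against a fixed environment map as in the prior work the abstract criticizes.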