AHCI RESEARCH GROUP
Publications
Papers published in international journals, proceedings of conferences, workshops, and books.
2025
Salinas, C. S.; Magudia, K.; Sangal, A.; Ren, L.; Segars, W. P.
In-silico CT simulations of deep learning generated heterogeneous phantoms Journal Article
In: Biomedical Physics and Engineering Express, vol. 11, no. 4, 2025, ISSN: 2057-1976 (Publisher: Institute of Physics).
@article{salinas_-silico_2025,
title = {In-silico CT simulations of deep learning generated heterogeneous phantoms},
author = {C. S. Salinas and K. Magudia and A. Sangal and L. Ren and W. P. Segars},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105010297226&doi=10.1088%2F2057-1976%2Fade9c9&partnerID=40&md5=47f211fd93f80e407dcd7e4c490976c2},
doi = {10.1088/2057-1976/ade9c9},
issn = {2057-1976},
year = {2025},
date = {2025-01-01},
journal = {Biomedical Physics and Engineering Express},
volume = {11},
number = {4},
abstract = {Current virtual imaging phantoms primarily emphasize geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation. We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform. Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distribution was 0.0016. These metrics marked improvements of 27%, 5.9%, 6.2%, and 28%, respectively, compared to current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan had a closer visual resemblance to the true CT scan compared to the previous method. The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.},
note = {Publisher: Institute of Physics},
keywords = {adult, algorithm, Algorithms, anatomical concepts, anatomical location, anatomical variation, Article, Biological organs, bladder, Bone, bone marrow, CGAN, colon, comparative study, computer assisted tomography, Computer graphics, computer model, Computer Simulation, Computer-Assisted, Computerized tomography, CT organ texture, CT organ textures, CT scanners, CT synthesis, CT-scan, Deep learning, fluorodeoxyglucose f 18, Generative Adversarial Network, Generative AI, histogram, human, human tissue, Humans, III-V semiconductors, image analysis, Image processing, Image segmentation, Image texture, Imaging, imaging phantom, intra-abdominal fat, kidney blood vessel, Learning systems, liver, lung, major clinical study, male, mean absolute error, Medical Imaging, neoplasm, Phantoms, procedures, prostate muscle, radiological parameters, signal noise ratio, Signal to noise ratio, Signal-To-Noise Ratio, simulation, Simulation platform, small intestine, Statistical tests, stomach, structural similarity index, subcutaneous fat, Textures, three dimensional double u net conditional generative adversarial network, Three-Dimensional, three-dimensional imaging, Tomography, Virtual CT scanner, Virtual Reality, Virtual trial, virtual trials, whole body CT, X-Ray Computed, x-ray computed tomography},
pubstate = {published},
tppubtype = {article}
}
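For readers who want to reproduce the kind of evaluation this abstract reports, the sketch below computes the three headline metrics (mean absolute difference in HU, SSIM, and PSNR) between a generated phantom volume and a reference CT volume using scikit-image. This is a minimal illustration, not the authors' evaluation pipeline; the array names, volume shapes, and the assumed HU data range are our own assumptions.

```python
# Hypothetical sketch: the similarity metrics named in the abstract
# (mean absolute difference in HU, SSIM, PSNR) between a generated
# phantom and a reference CT volume. Not the authors' code; names,
# shapes, and the HU data range below are assumptions.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare_volumes(generated_hu: np.ndarray, reference_hu: np.ndarray):
    """Compare two same-shape CT volumes given in Hounsfield units."""
    # Mean absolute difference in HU, as reported in the abstract.
    mad = float(np.mean(np.abs(generated_hu - reference_hu)))

    # A typical clinical CT range spans roughly -1024..3071 HU; this
    # data_range is an assumption, not a value taken from the paper.
    data_range = 3071.0 - (-1024.0)

    ssim = structural_similarity(reference_hu, generated_hu, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference_hu, generated_hu, data_range=data_range)
    return mad, ssim, psnr

# Example with random stand-in volumes; real use would load the phantom
# and CT arrays from file.
rng = np.random.default_rng(0)
ref = rng.uniform(-1024, 3071, size=(64, 64, 64))
gen = ref + rng.normal(0, 40, size=ref.shape)  # perturbed copy of ref
print(compare_volumes(gen, ref))
```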
2024
Chen, X.; Gao, W.; Chu, Y.; Song, Y.
Enhancing interaction in virtual-real architectural environments: A comparative analysis of generative AI-driven reality approaches Journal Article
In: Building and Environment, vol. 266, 2024, ISSN: 0360-1323 (Publisher: Elsevier Ltd).
@article{chen_enhancing_2024,
title = {Enhancing interaction in virtual-real architectural environments: A comparative analysis of generative AI-driven reality approaches},
author = {X. Chen and W. Gao and Y. Chu and Y. Song},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205298350&doi=10.1016%2Fj.buildenv.2024.112113&partnerID=40&md5=58a1160f986c827b4063f095fee77d6e},
doi = {10.1016/j.buildenv.2024.112113},
issn = {0360-1323},
year = {2024},
date = {2024-01-01},
journal = {Building and Environment},
volume = {266},
abstract = {The architectural environment is expanding into digital, virtual, and informational dimensions, introducing challenges in virtual-real space interaction. Traditional design methods struggle with real-time interaction, integration with existing workflows, and rapid space modification. To address these issues, we present a generative design method that enables symbiotic interaction between virtual and real spaces using Mixed Reality (MR) and Generative Artificial Intelligence (AI) technologies. We developed two approaches: one using the Rhino modeling platform and the other based on the Unity3D game engine, tailored to different application needs. User experience testing in exhibition, leisure, and residential spaces evaluated our method's effectiveness. Results showed significant improvements in design flexibility, interactive efficiency, and user satisfaction. In the exhibition scenario, the Unity3D-based method excelled in rapid design modifications and immersive experiences. Questionnaire data indicated that MR offers good visual comfort and higher immersion than VR, effectively supporting architects in interface and scale design. Clustering analysis of participants' position and gaze data revealed diverse behavioral patterns in the virtual-physical exhibition space, providing insights for optimizing spatial layouts and interaction methods. Our findings suggest that the generative AI-driven MR method simplifies traditional design processes by enabling real-time modification and interaction with spatial interfaces through simple verbal and motion interactions. This approach streamlines workflows by reducing steps like measuring, modeling, and rendering, while enhancing user engagement and creativity. Overall, this method offers new possibilities for experiential exhibition and architectural design, contributing to future environments where virtual and real spaces coexist seamlessly. © 2024 Elsevier B.V., All rights reserved.},
note = {Publisher: Elsevier Ltd},
keywords = {Architectural design, Architectural environment, Architectural environments, Artificial intelligence, cluster analysis, Comparative analyzes, comparative study, Computational design, Generative adversarial networks, Generative AI, generative artificial intelligence, Mixed reality, Real time interactions, Real-space, Unity3d, Virtual addresses, Virtual environments, Virtual Reality, Virtual spaces, Work-flows},
pubstate = {published},
tppubtype = {article}
}
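The abstract mentions clustering participants' position and gaze data to surface behavioral patterns in the virtual-physical exhibition space. The sketch below shows one plausible way to do that with k-means; it is an illustration under stated assumptions, not the authors' pipeline, and the feature layout, sample data, and cluster count are all hypothetical.

```python
# Hypothetical sketch of the clustering analysis the abstract describes:
# grouping participants' position and gaze samples to find behavioral
# patterns. Not the authors' pipeline; the feature layout and k are
# assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed layout: one row per sample, columns = (x, y, z position,
# gaze yaw, gaze pitch). Real data would come from MR headset logs.
rng = np.random.default_rng(42)
samples = rng.normal(size=(500, 5))

# Standardize so positions (meters) and gaze angles (degrees) are
# comparable before distance-based clustering.
features = StandardScaler().fit_transform(samples)

# k=4 is illustrative; in practice one would choose k via silhouette
# scores or an elbow plot.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
print(np.bincount(kmeans.labels_))  # samples per behavioral cluster
```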