Preserving Anatomical Details: Qualitative Assessment of S-CycleGAN for Ultrasound Synthesis

Written by tomography | Published 2025/10/02
Tech Story Tags: deep-learning | s-cyclegan | ct-to-ultrasound | medical-image-synthesis | ai-in-healthcare | healthcare-tech | health-tech | ct-scans

TL;DR: This article details the experimental setup and qualitative results of the S-CycleGAN model for CT-to-ultrasound image translation.

Table of Links

Abstract and 1. Introduction

II. Related Work

III. Methodology

IV. Experiments and Results

V. Conclusion and References

IV. EXPERIMENTS AND RESULTS

A. Dataset

In this study, the CT data is sourced from the AbdomenCT-1K dataset [16], while the ultrasound data is obtained from the Kaggle US simulation & segmentation dataset [14]. Both datasets contain scans of the abdominal region. The CT dataset is annotated with four anatomical structures: liver, kidney, spleen, and pancreas. The ultrasound dataset, by contrast, includes annotations for eight anatomical structures: liver, kidney, spleen, pancreas, vessels, adrenals, gallbladder, and bones. We therefore focus on the structures common to both datasets as the anatomical structures of interest. The specific organs and their corresponding mask colors are detailed in Table I.
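For illustration, the snippet below sketches one way the four shared structures could be encoded as a common label set; the class IDs and RGB colors are placeholders, since Table I is not reproduced in this excerpt.

```python
# Hypothetical shared label set for the four overlapping structures.
# Class IDs and RGB mask colors are placeholders; the actual colors
# are listed in Table I of the paper.
SHARED_STRUCTURES = {
    "liver":    {"class_id": 1, "color": (255, 0, 0)},
    "kidney":   {"class_id": 2, "color": (0, 255, 0)},
    "spleen":   {"class_id": 3, "color": (0, 0, 255)},
    "pancreas": {"class_id": 4, "color": (255, 255, 0)},
}
```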

The AbdomenCT-1K dataset provides more than 1,000 CT scans in 3D format. We first randomly select 200 CT scans and, for each scan, randomly sample 10 transverse-plane slices. To give the images a more uniform shape, we apply a fan-shaped mask to the CT slices to mimic the outline of convex ultrasound images, as sketched below.
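A minimal sketch of this preprocessing step is shown below; the mask geometry (apex position, opening angle, radii) and the slice-sampling loop are illustrative assumptions, not the exact parameters used in the paper.

```python
import random
import numpy as np

def fan_mask(height, width, angle_deg=70.0, r_min=20, r_max=None):
    """Binary sector ("fan") mask mimicking the footprint of a convex
    ultrasound probe. All geometry values are illustrative placeholders."""
    apex = (0, width // 2)                    # assume the apex sits at the top center
    r_max = height - 1 if r_max is None else r_max
    ys, xs = np.mgrid[0:height, 0:width]
    dy, dx = ys - apex[0], xs - apex[1]
    r = np.hypot(dx, dy)
    theta = np.degrees(np.arctan2(dx, dy))    # angle from the downward vertical
    return (r >= r_min) & (r <= r_max) & (np.abs(theta) <= angle_deg / 2.0)

# Hypothetical sampling loop; the volume loader is a placeholder.
# ct_volumes = load_abdomenct1k()                        # list of 3-D arrays (not shown)
# for vol in random.sample(ct_volumes, 200):             # 200 randomly chosen scans
#     for i in random.sample(range(vol.shape[0]), 10):   # 10 transverse slices per scan
#         sl = vol[i]
#         masked = np.where(fan_mask(*sl.shape), sl, sl.min())
```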

B. Network Implementation and Training

C. Qualitative Results

Figs. 5 and 6 present examples of the CT-to-ultrasound translation results. Compared with Fig. 3, these visual comparisons demonstrate that S-CycleGAN not only mimics the ultrasound style but also preserves critical anatomical features. The synthetic images closely resemble real ultrasound scans in terms of texture and shape, suggesting a high level of detail preservation.

Authors:

(1) Yuhan Song, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan ([email protected]);

(2) Nak Young Chong, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan ([email protected]).


This paper is available on arxiv under ATTRIBUTION-NONCOMMERCIAL-NODERIVS 4.0 INTERNATIONAL license.

