IV. EXPERIMENTS AND RESULTS
A. Dataset
In this study, the CT data is sourced from the AbdomenCT-1K dataset [16], while the ultrasound data is obtained from the Kaggle US simulation & segmentation dataset [14]. Both datasets contain scans of the abdominal region. The CT dataset is annotated with four anatomical structures: liver, kidney, spleen, and pancreas. The ultrasound dataset, in contrast, includes annotations for eight anatomical structures: liver, kidney, spleen, pancreas, vessels, adrenals, gallbladder, and bones. We therefore take the structures shared by the two datasets as the anatomical structures of interest in this work. The specific organs and their corresponding mask colors are detailed in Table I.
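Because only the liver, kidney, spleen, and pancreas are labeled in both datasets, the remaining ultrasound labels must be discarded before training. The snippet below is a minimal sketch of that filtering step; the label IDs and the helper `keep_shared_structures` are hypothetical placeholders, since the actual mappings are defined by the datasets' annotation schemes and Table I.

```python
import numpy as np

# Hypothetical label IDs for the ultrasound masks; the real values are
# defined by the dataset's annotation scheme, not by this snippet.
US_LABELS = {"liver": 1, "kidney": 2, "spleen": 3, "pancreas": 4,
             "vessels": 5, "adrenals": 6, "gallbladder": 7, "bones": 8}
SHARED = {"liver", "kidney", "spleen", "pancreas"}  # structures present in both datasets

def keep_shared_structures(mask: np.ndarray) -> np.ndarray:
    """Zero out every ultrasound label that has no CT counterpart."""
    keep_ids = [US_LABELS[name] for name in SHARED]
    return np.where(np.isin(mask, keep_ids), mask, 0).astype(mask.dtype)
```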
The AbdomenCT-1K dataset provides more than 1,000 CT scans in volumetric (3D) format. We first randomly select 200 CT scans and, from each scan, randomly sample 10 transverse-plane slices. To give the images a more uniform shape, we apply a fan-shaped mask to the CT slices to mimic the outline of convex-probe ultrasound images.
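The following is a minimal sketch of this preprocessing step, assuming the CT volume is already loaded as a (depth, H, W) array (e.g., with SimpleITK or nibabel). The apex position, opening angle, and radius fractions of the fan are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def fan_mask(h, w, apex=(0.0, 0.5), angle_deg=60.0, r_min_frac=0.08, r_max_frac=0.95):
    """Boolean fan (convex-probe) mask for an image of size (h, w).

    apex is given as fractional (row, col) coordinates and angle_deg is the
    full opening angle of the fan; these values are illustrative assumptions.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = apex[0] * h, apex[1] * w
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx)
    theta = np.degrees(np.arctan2(dx, dy))  # 0 deg points straight down from the apex
    return (np.abs(theta) <= angle_deg / 2.0) & (r >= r_min_frac * h) & (r <= r_max_frac * h)

# Sample 10 random transverse slices from one CT volume and apply the fan mask.
rng = np.random.default_rng(0)
volume = rng.normal(size=(120, 512, 512)).astype(np.float32)  # placeholder for a loaded CT volume
slice_idx = rng.choice(volume.shape[0], size=10, replace=False)
mask = fan_mask(*volume.shape[1:])
fan_slices = [np.where(mask, volume[i], volume[i].min()) for i in slice_idx]
```

Pixels outside the fan are set to the slice minimum here; any constant background value would serve, since the goal is only to reproduce the outline of a convex ultrasound image.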
B. Network Implementation and Training
C. Qualitative Results
Figs. 5 and 6 present examples of the CT-to-ultrasound translation results. These visual comparisons demonstrate that S-CycleGAN not only mimics the ultrasound style but also preserves critical anatomical features, as can be seen by comparison with Fig. 3. The synthetic images closely resemble real ultrasound scans in texture and shape, suggesting a high level of detail preservation.
Authors:
(1) Yuhan Song, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan;
(2) Nak Young Chong, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan.