
Deep Neural Networks to Detect and Quantify Lymphoma Lesions: Conclusion and References

Too Long; Didn't Read

This study performs a comprehensive evaluation of four neural network architectures for lymphoma lesion segmentation from PET/CT images.

Authors:

(1) Shadab Ahamed, University of British Columbia, Vancouver, BC, Canada, BC Cancer Research Institute, Vancouver, BC, Canada. He was also a Mitacs Accelerate Fellow (May 2022 - April 2023) with Microsoft AI for Good Lab, Redmond, WA, USA (e-mail: [email protected]);

(2) Yixi Xu, Microsoft AI for Good Lab, Redmond, WA, USA;

(3) Claire Gowdy, BC Children’s Hospital, Vancouver, BC, Canada;

(4) Joo H. O, St. Mary’s Hospital, Seoul, Republic of Korea;

(5) Ingrid Bloise, BC Cancer, Vancouver, BC, Canada;

(6) Don Wilson, BC Cancer, Vancouver, BC, Canada;

(7) Patrick Martineau, BC Cancer, Vancouver, BC, Canada;

(8) François Bénard, BC Cancer, Vancouver, BC, Canada;

(9) Fereshteh Yousefirizi, BC Cancer Research Institute, Vancouver, BC, Canada;

(10) Rahul Dodhia, Microsoft AI for Good Lab, Redmond, WA, USA;

(11) Juan M. Lavista, Microsoft AI for Good Lab, Redmond, WA, USA;

(12) William B. Weeks, Microsoft AI for Good Lab, Redmond, WA, USA;

(13) Carlos F. Uribe, BC Cancer Research Institute, Vancouver, BC, Canada, and University of British Columbia, Vancouver, BC, Canada;

(14) Arman Rahmim, BC Cancer Research Institute, Vancouver, BC, Canada, and University of British Columbia, Vancouver, BC, Canada.

VI. CONCLUSION

In this study, we assessed various neural network architectures for automating lymphoma lesion segmentation in PET/CT images across multiple datasets. We examined the reproducibility of lesion measures, revealing differences among networks that highlight their suitability for specific clinical uses. Additionally, we introduced three lesion detection criteria to assess network performance at a per-lesion level, emphasizing their clinical relevance. Lastly, we discussed challenges related to ground truth consistency and stressed the importance of a well-defined segmentation protocol. This work provides valuable insights into the potential and limitations of deep learning for lymphoma lesion segmentation and emphasizes the need for standardized annotation practices to enhance research validity and clinical applicability.
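The lesion measures and detection criteria summarized above are defined in the body of the paper rather than in this section. As a rough sketch of the kinds of quantities involved, the snippet below computes two common PET lesion measures (TMTV and TLG) and an overlap-based per-lesion detection check. The function names, the 50% overlap threshold, and the use of connected-component labeling are illustrative assumptions, not the paper's exact criteria.

```python
# Illustrative sketch only: generic PET lesion measures and a hypothetical
# overlap-based per-lesion detection check, not the paper's specific criteria.
import numpy as np
from scipy import ndimage


def tmtv_ml(mask: np.ndarray, voxel_volume_ml: float) -> float:
    """Total metabolic tumor volume: foreground voxel count times voxel volume."""
    return float((mask > 0).sum()) * voxel_volume_ml


def tlg(mask: np.ndarray, suv: np.ndarray, voxel_volume_ml: float) -> float:
    """Total lesion glycolysis: SUVmean of the segmented region times its volume."""
    if (mask > 0).sum() == 0:
        return 0.0
    return float(suv[mask > 0].mean()) * tmtv_ml(mask, voxel_volume_ml)


def detected_lesions(gt_mask: np.ndarray, pred_mask: np.ndarray, min_overlap: float = 0.5):
    """Count ground-truth lesions whose voxels are covered by the prediction
    above a fractional threshold (assumed overlap-based criterion)."""
    labels, n_lesions = ndimage.label(gt_mask > 0)  # connected components = lesions
    detected = 0
    for lesion_id in range(1, n_lesions + 1):
        lesion = labels == lesion_id
        overlap = np.logical_and(lesion, pred_mask > 0).sum() / lesion.sum()
        if overlap >= min_overlap:
            detected += 1
    return detected, n_lesions
```

In practice, measures like these would be computed from both the network predictions and the physician segmentations, so that agreement between them, and hence the reproducibility of the derived lesion measures, can be compared across architectures.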

REFERENCES

[1] S. F. Barrington et al. “FDG PET for therapy monitoring in Hodgkin and non-Hodgkin lymphomas”. In: European Journal of Nuclear Medicine and Molecular Imaging 44.1 (Aug. 2017), pp. 97–110. ISSN: 1619-7089.


[2] K. Okuyucu et al. “Prognosis estimation under the light of metabolic tumor parameters on initial FDG-PET/CT in patients with primary extranodal lymphoma”. In: Radiol. Oncol. 50.4 (Dec. 2016), pp. 360–369.


[3] N. Wu et al. “Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening”. In: IEEE Transactions on Medical Imaging 39.4 (2020), pp. 1184–1194.


[4] C. Yuan et al. “Diffuse large B-cell lymphoma segmentation in PET-CT images via hybrid learning for feature fusion”. In: Medical Physics 48.7 (2021), pp. 3665–3678.


[5] H. Hu et al. “Lymphoma Segmentation in PET Images Based on Multi-view and Conv3D Fusion Strategy”. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). 2020, pp. 1197–1200.


[6] H. Li et al. “DenseX-Net: An End-to-End Model for Lymphoma Segmentation in Whole-Body PET/CT Images”. In: IEEE Access 8 (2020), pp. 8004–8018.


[7] L. Liu et al. “Improved Multi-modal Patch Based Lymphoma Segmentation with Negative Sample Augmentation and Label Guidance on PET/CT Scans”. In: Multiscale Multimodal Medical Imaging. Ed. by X. Li et al. Cham: Springer Nature Switzerland, 2022, pp. 121–129. ISBN: 978-3-031-18814-5.


[8] C. S. Constantino et al. “Evaluation of Semiautomatic and Deep Learning–Based Fully Automatic Segmentation Methods on [18F]FDG PET/CT Images from Patients with Lymphoma: Influence on Tumor Characterization”. In: Journal of Digital Imaging 36.4 (Aug. 2023), pp. 1864–1876. ISSN: 1618-727X.


[9] A. J. Weisman et al. “Comparison of 11 automated PET segmentation methods in lymphoma”. In: Physics in Medicine & Biology 65.23 (2020), p. 235019.


[10] A. J. Weisman et al. “Convolutional Neural Networks for Automated PET/CT Detection of Diseased Lymph Node Burden in Patients with Lymphoma”. In: Radiology: Artificial Intelligence 2.5 (2020), e200016.


[11] C. Jiang et al. “Deep learning–based tumour segmentation and total metabolic tumour volume prediction in the prognosis of diffuse large B-cell lymphoma patients in 3D FDG-PET images”. In: European Radiology 32.7 (July 2022), pp. 4801–4812. ISSN: 1432-1084.


[12] P. Blanc-Durand et al. “Fully automatic segmentation of diffuse large B cell lymphoma lesions on 3D FDG-PET/CT for total metabolic tumour volume prediction using a convolutional neural network”. In: European Journal of Nuclear Medicine and Molecular Imaging 48.5 (May 2021), pp. 1362–1370. ISSN: 1619-7089.


[13] S. Ahamed et al. “A U-Net Convolutional Neural Network with Multiclass Dice Loss for Automated Segmentation of Tumors and Lymph Nodes from Head and Neck Cancer PET/CT Images”. In: Head and Neck Tumor Segmentation and Outcome Prediction. Ed. by V. Andrearczyk et al. Cham: Springer Nature Switzerland, 2023, pp. 94–106. ISBN: 978-3-031-27420-6.


[14] S. Gatidis et al. “A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions”. In: Scientific Data 9.1 (Oct. 2022), p. 601. ISSN: 2052-4463.


[15] M. Pop et al. “Left-Ventricle Quantification Using Residual U-Net”. In: Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. Vol. 11395. Switzerland: Springer International Publishing AG, 2019, pp. 371–380. ISSN: 0302-9743.


[16] A. Myronenko. “3D MRI Brain Tumor Segmentation Using Autoencoder Regularization”. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Cham: Springer International Publishing, 2019, pp. 311–320. ISSN: 0302-9743.


[17] F. Isensee et al. “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation”. In: Nature Methods 18.2 (Dec. 2020), pp. 203–211.


[18] A. Hatamizadeh et al. “Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images”. 2022.


[19] M. J. Cardoso et al. MONAI: An open-source framework for deep learning in healthcare. 2022. arXiv: 2211.02701 [cs.LG].


[20] S. Ahamed et al. “Towards enhanced lesion segmentation using a 3D neural network trained on multiresolution cropped patches of lymphoma PET images”. In: Journal of Nuclear Medicine 64.supplement 1 (2023), P1360–P1360. ISSN: 0161-5505.


[21] J. L. Fleiss. “Measuring nominal scale agreement among many raters”. In: Psychological Bulletin 76.5 (1971), pp. 378–382.


[22] A. K. Jha et al. “Nuclear Medicine and artificial intelligence: Best practices for evaluation (the RELAINCE guidelines)”. In: J. Nucl. Med. 63.9 (Sept. 2022), pp. 1288–1299.


[23] N. Hasani et al. “Artificial Intelligence in Lymphoma PET Imaging: A Scoping Review (Current Trends and Future Directions)”. In: PET Clinics 17.1 (Jan. 2022), pp. 145–174. ISSN: 1556-8598.


[24] S. K. Warfield et al. “Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation”. In: IEEE Trans. Med. Imaging 23.7 (July 2004), pp. 903–921.


This paper is available on arXiv under a CC 4.0 license.