Towards a device-independent deep learning approach for the automated segmentation of sonographic fetal brain structures

Abhi Lad, Adithya Narayan, Hari Shankar, Shefali Jain, Pooja Vyas, Divya Singh, Nivedita Hegde, Jagruthi Atada, Jens Thang, Saw Shier Nee, Arunkumar Govindarajan, Roopa PS, Muralidhar V Pai, Akhila Vasudeva, Prathima Radhakrishnan, and Sripad Krishna Devalla

Access to quality prenatal ultrasonography (USG) is limited by the availability of well-trained fetal sonographers. By leveraging deep learning (DL), we can assist even novice users in delivering standardized, high-quality prenatal USG examinations, necessary for timely screening and specialist referrals in cases of fetal anomalies. We propose a DL framework to segment 10 key fetal brain structures across the 2 axial views required for a standardized USG examination. Despite being trained on images from only 1 center (2 USG devices), our DL model generalized well to unseen devices from other centers. The use of domain-specific data augmentation significantly improved segmentation performance across test sets and across other benchmarked DL models as well. We believe our work opens the door to the development of device-independent and robust models, a necessity for seamless clinical translation and deployment.
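
The abstract credits domain-specific data augmentation for much of the cross-device generalization but does not detail the transforms. The sketch below is only an illustration of what ultrasound-oriented, intensity-only augmentations could look like (speckle-like multiplicative noise and gain/gamma jitter on normalized B-mode images); the function names, probabilities, and parameter ranges are assumptions, not the authors' pipeline.

```python
import numpy as np

# Illustrative, hypothetical augmentations for normalized B-mode ultrasound
# images in [0, 1]; parameter ranges are assumptions, not the paper's settings.

def add_speckle_noise(image, sigma=0.1):
    """Multiplicative (speckle-like) noise, characteristic of ultrasound."""
    noise = np.random.normal(loc=1.0, scale=sigma, size=image.shape)
    return np.clip(image * noise, 0.0, 1.0)

def adjust_gain(image, gamma_range=(0.7, 1.3)):
    """Gamma adjustment to mimic device-dependent gain/contrast settings."""
    gamma = np.random.uniform(*gamma_range)
    return np.clip(image ** gamma, 0.0, 1.0)

def augment(image, mask):
    """Apply intensity-only transforms; the segmentation mask is unchanged."""
    if np.random.rand() < 0.5:
        image = add_speckle_noise(image)
    if np.random.rand() < 0.5:
        image = adjust_gain(image)
    return image, mask
```

Because these transforms alter only pixel intensities, the corresponding segmentation masks require no geometric adjustment, which is one plausible reason intensity-level, device-mimicking augmentation suits a multi-device segmentation setting.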