Accurate and robust segmentation of cerebral distal small arteries by DVNet with dual contextual path and vascular attention enhancement
Introduction
Alterations in cerebrovascular structures serve as important markers for numerous brain-related diseases (1,2). Cerebrovascular segmentation from vascular images can improve the diagnosis of cerebrovascular diseases, the quantitative evaluation of vascular lesions, and surgical planning (3). Recently, growing interest in developing image biomarkers for brain aging and neurodegenerative diseases based on quantitative features of cerebral vessels has underscored the importance of accurate and rapid segmentation of cerebral vasculature from magnetic resonance (MR) vascular images (4-6). However, developing an automatic segmentation algorithm to accurately delineate cerebral vascular trees from magnetic resonance angiography (MRA) data remains challenging. Due to the complex geometry of the human brain, scanning artifacts and acquisition limitations, and the wide range of intensities, densities, and diameters of small blood vessels (≤1 mm), previously described cerebrovascular segmentation algorithms have struggled to segment distal small arteries accurately (7). Furthermore, manual or semiautomatic segmentation methods that involve human-operator interactions are known to be time-consuming, prone to errors, and susceptible to interobserver variability.
Recently, deep convolutional neural network (CNN)-based deep learning (DL) methods have achieved success in cerebrovascular segmentation tasks, and enhancing cerebrovascular features can significantly improve segmentation performance (8,9). Previous studies have demonstrated that multiscale training on large datasets to fully learn cerebrovascular features (10) and enhancement of the edge features of cerebrovascular structures (11) can strengthen vascular features in CNN-based segmentation models. Guo et al. (8) proposed an approach for training a cross-modality cerebrovascular segmentation network based on paired data from source and target domains, which is effective for cross-modality cerebrovascular segmentation and achieves state-of-the-art performance. Nevertheless, these methods do not consider the intricate details of cerebrovascular geometry or the complexities inherent in cerebrovascular topology. Several studies have noted that recent methods based on anomaly detection and dilated convolutions remain insufficiently analyzed (12-15), and only a few advanced DL networks can handle images with complex structures or backgrounds (16,17).
In this study, we designed a novel cerebrovascular segmentation method with enhanced geometric and topological features. First, we proposed a vessel-specific method [deep vascular network (DVNet)] based on a dual contextual path (DCP) and a vascular attention enhancement module (VAEM) for cerebrovascular segmentation. We designed a DCP for the complex topology of cerebrovascular vessels to capture both high- and low-resolution patches, expanding the receptive field during training to better retain the cerebrovascular topology information and ultimately enhancing the continuity of the segmented cerebrovascular structures. This dual encoder-decoder method has also been used in medical image segmentation problems (18). In addition, to enhance the precision of segmenting cerebral distal small arteries, we designed a VAEM to learn fine cerebrovascular features better for the fine geometry of cerebrovascular vessels. We hypothesized that DVNet with the DCP and vascular attention enhancement proposed in our study could achieve good accuracy for the segmentation of cerebral distal small arteries by maintaining topological information as completely as possible and enhancing cerebrovascular geometry.
Methods
Data and image preprocessing
In this study, MRA images were derived from the publicly available Database of MR Brain Images of Healthy Volunteers (MIDAS; https://data.kitware.com/#collection/591086ee8d777f16d01e0724/folder/58a372e38d777f0721a64dc6) (training and test) and Nanjing First Hospital (external validation) datasets. This retrospective study was approved by the Ethics Committee of Nanjing Medical University (No. 2019-664) and was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The requirement for individual consent was waived by the Ethics Committee due to the retrospective nature of the analysis. The MIDAS dataset contains 100 MRA image pairs from healthy volunteers acquired with a Siemens Allegra head-only 3T MR system (Siemens Healthineers, Erlangen, Germany). The voxel spacing of the MRA images was 0.5 mm × 0.5 mm × 0.8 mm, with a volume of 448×448×128 voxels. MIDAS consists of two subsets: MIDAS-I and MIDAS-II. MIDAS-I, the first subset, includes 20 MRA images with manual annotations. The region-growing macro module implemented in MeVisLab (19) (MeVis Medical Solutions, Bremen, Germany) was first used to prelabel blood vessels via a threshold-based region-growing algorithm (20). Annotation was then performed rigorously by three radiologists, each with more than 3 years of clinical experience, after which a senior imaging expert with 9 years of experience performed annotation quality control. MIDAS-II, the second subset, includes 42 MRA images with intracranial vasculature annotations (centerline + radius) extracted from the MRA images by Aylward and Bullitt (21). A total of 42 volume annotations were generated with the open-source Insight Toolkit (ITK) (22). In the experiments, the training and testing images were resampled to an isotropic voxel size of 0.5 mm with trilinear interpolation, for a final size of 448×448×192 voxels.
During training, patches with a size of 128×128×128 were randomly cropped from the volume, followed by a random rotation of 0–90 degrees for data augmentation. To assess external segmentation validity, a validation set consisting of 100 patients with intracranial artery stenosis and 100 patients with normal intracranial arteries from Nanjing First Hospital was used. Patients in the validation set were scanned on a 3T MR system (Ingenia, Philips Healthcare, Chicago, IL, USA; Magnetom Prisma, Siemens Healthineers). A detailed flowchart of the study is provided in Figure 1.
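The patch sampling and augmentation described above can be sketched in a few lines (a minimal NumPy illustration, not the authors' implementation; for simplicity the rotation here is restricted to axis-aligned 90° steps, whereas the study uses arbitrary 0–90° rotations):

```python
import numpy as np

def sample_training_patch(volume, label, patch=128, rng=None):
    """Randomly crop a cubic patch and apply a random rotation.

    `volume` and `label` are 3-D arrays of identical shape
    (e.g. 448x448x192 after isotropic resampling).
    """
    rng = rng or np.random.default_rng()
    z, y, x = volume.shape
    # Random corner such that the patch fits inside the volume.
    cz = rng.integers(0, z - patch + 1)
    cy = rng.integers(0, y - patch + 1)
    cx = rng.integers(0, x - patch + 1)
    v = volume[cz:cz+patch, cy:cy+patch, cx:cx+patch]
    l = label[cz:cz+patch, cy:cy+patch, cx:cx+patch]
    # Simplified augmentation: rotate 90 degrees in a random plane,
    # applied identically to the image and its label.
    if rng.integers(0, 2):
        axes = [(0, 1), (0, 2), (1, 2)][rng.integers(0, 3)]
        v = np.rot90(v, 1, axes).copy()
        l = np.rot90(l, 1, axes).copy()
    return v, l
```

Arbitrary-angle rotations would additionally require interpolation (e.g. `scipy.ndimage.rotate`) and careful handling of the binary labels.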
![Figure 1](http://cdn.amegroups.cn/journals/amepc/files/journals/4/articles/133484/public/133484-PB4-5181-R1.jpg/w300)
DCP of cerebrovascular segmentation
The DCP captures both high- and low-resolution patches and expands the receptive field during training to better retain cerebrovascular topology information. The encoder of our method is extended with a contextual path: a larger patch, extracted around the same center voxel as the local patch, is fed into the global encoding path, while the local patch enters the local feature encoding path. Inspired by Kamnitsas et al. (23), the larger input patch is downsampled by average pooling with 2×2×2 kernels and a stride of 2, yielding an input with the same dimensions as the local encoding path but at half the resolution. Downsampling discards fine cerebrovascular details but emphasizes contextual, i.e., topological, information. After downsampling, the patch is fed into an encoding path parallel to the local feature encoding path, so that distinct features are encoded in parallel along the contextual path. In the decoding layers, the topological information contributed by connections from the contextual path enhances the cerebrovascular segmentation results.
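The dual-path sampling above can be illustrated as follows (a simplified NumPy sketch under stated assumptions, not the authors' implementation: the context patch is taken as twice the local patch size, centered on the same voxel, and average-pooled with a 2×2×2 kernel and stride 2 so that both paths receive arrays of identical dimensions):

```python
import numpy as np

def avg_pool_2x(vol):
    """2x2x2 average pooling with stride 2 (halves each dimension)."""
    z, y, x = vol.shape
    return vol.reshape(z//2, 2, y//2, 2, x//2, 2).mean(axis=(1, 3, 5))

def dual_context_patches(volume, center, local_size=64):
    """Extract the full-resolution local patch and a 2x-larger context
    patch around `center`, then downsample the context patch to the
    local patch's dimensions (half resolution, wider receptive field)."""
    def crop(size):
        half = size // 2
        sl = tuple(slice(c - half, c + half) for c in center)
        return volume[sl]
    local = crop(local_size)                     # fine geometric detail
    context = avg_pool_2x(crop(2 * local_size))  # coarse topological context
    return local, context
```

The two arrays would then be fed to the two parallel encoders; the 2× size ratio and the `local_size` default are illustrative choices, not values reported in the paper.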
VAEM for cerebrovascular segmentation
The VAEM adaptively increases the weight of cerebrovascular structural voxels using the output features from the high-level side path, enabling better learning of fine cerebrovascular features. Progressively more global information is obtained through repeated max pooling and convolution operations in the encoder. However, downsampling also discards many details, so information on small cerebral blood vessels is especially likely to be lost. Inspired by Xia et al. (11), we introduced the VAEM, which is embedded between adjacent encoding layers to learn vascular structure information from the feature map generated in the higher-level encoding layer, where vascular structure information is recovered by upsampling from the deeper layer. The upsampling stage is shown in Eq. [1], where F_high ∈ ℝ^(h×w×d×c) denotes the high-level encoding feature; h, w, and d represent the height, width, and depth of the feature map, respectively; and c represents the number of feature channels. We use a 1×1×1 three-dimensional (3D) convolution (Conv1) to obtain a single-channel feature and a 2×2×2 deconvolution with a stride of 2 to restore the spatial resolution of the adjacent lower level. The upsampled feature F_up is finally generated:

F_up = Deconv_2×2×2(Conv1_1×1×1(F_high))　[1]

The output feature map F_up is passed through a sigmoid activation σ(·) to extract deep vascular information, which is then added to the low-level feature F_low to obtain the enhanced vascular feature F_enh:

F_enh = F_low + σ(F_up)　[2]
The feature selection module proposed by Xia et al. (11) gives greater weight to features significantly correlated with vascular-like structures, which ultimately favors the segmentation task. The method is shown in Figure 2.
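The attention step described above can be sketched with NumPy (a simplified illustration under stated assumptions: nearest-neighbour upsampling stands in for the learned 2×2×2 deconvolution, a fixed weight vector stands in for the learned 1×1×1 convolution, and the sigmoid map is broadcast over channels and added to the low-level features, as the text describes):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def vaem_enhance(low_feat, high_feat, proj):
    """Simplified vascular attention enhancement.

    low_feat:  (h, w, d, c)       low-level feature map
    high_feat: (h/2, w/2, d/2, k) high-level feature map
    proj:      (k,)               weights emulating a 1x1x1 conv to one channel
    """
    # 1x1x1 convolution reduces the high-level features to one channel.
    single = high_feat @ proj
    # Nearest-neighbour upsampling stands in for the stride-2 deconvolution.
    up = single.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)
    # Sigmoid gives a vascular attention map, broadcast over channels
    # and added to the low-level features.
    return low_feat + sigmoid(up)[..., None]
```

In the actual network both the projection and the upsampling are learned layers trained end to end with the rest of DVNet.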
![Figure 2](http://cdn.amegroups.cn/journals/amepc/files/journals/4/articles/133484/public/133484-PB5-3805-R1.jpg/w300)
Statistical analysis
The proposed DVNet was implemented in PyTorch 1.8.0 (24) and run on NVIDIA GeForce RTX 3090 GPUs with CUDA 11.4 (NVIDIA, Santa Clara, CA, USA). Five-fold cross-validation was performed for the proposed method, with no data overlap between the training and testing sets. The initial learning rate was set to 0.001 and halved every 50 epochs; the batch size was 16, and the number of training epochs was 500. The Adam optimizer (25) was employed for parameter optimization. To quantitatively evaluate segmentation performance and demonstrate the advantages of the proposed DVNet, we used volume-based evaluation metrics: sensitivity, specificity, Dice coefficient, the 95th percentile Hausdorff distance (HD95), and mean Intersection over Union (IoU). Six methods were compared with our proposed method: two state-of-the-art 3D medical image segmentation networks, 3D U-Net (UNet) (26) and V-Net (VNet) (27), and four state-of-the-art 3D cerebrovascular image segmentation networks, BraveNet (10), DeepMedic (23), the edge-reinforced network (ERNet) (11), and Swin UNETR (28). Differences were assessed with the Wilcoxon signed-rank test. Ablation experiments were conducted by removing one or both modules (the DCP and VAEM) to compare the segmentation efficacy of DVNet.
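The volume-based metrics listed above follow directly from the voxelwise confusion counts; a minimal sketch for binary volumes (HD95 is omitted here, as it is typically computed with a dedicated surface-distance routine rather than from the confusion matrix):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Volume-based metrics from binary prediction/ground-truth arrays."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # vessel voxels correctly labeled
    tn = np.sum(~pred & ~truth)     # background correctly labeled
    fp = np.sum(pred & ~truth)      # background labeled as vessel
    fn = np.sum(~pred & truth)      # vessel voxels missed
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "iou": tp / (tp + fp + fn),
    }
```

Because the vascular foreground occupies only a tiny fraction of the brain volume, specificity is near 1 for every method, so Dice, sensitivity, IoU, and HD95 carry most of the discriminative information in Tables 1 and 2.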
Results
Qualitative segmentation analysis
The qualitative visualization results of the different methods are shown in Figures 3-5. DVNet demonstrated strong integrity in accurately tracing meandering arteries and segmenting distal small arteries according to their geometric shapes. The red boxes in Figure 3 demonstrate the superior coherence of DVNet in segmenting cerebrovascular bifurcations and curved branches compared to the alternative methods, which exhibit breakage. Similarly, the red boxes in Figure 4 show that DVNet can effectively segment distal small arteries, in contrast to the other methods, which yield less distinct segmentation outcomes. Figure 5 illustrates the results on slices from MIDAS-I and MIDAS-II, with DVNet exhibiting strong segmentation performance at points of weak intensity. Although all methods demonstrated some level of performance, the previously proposed methods tended to exhibit disconnections and inaccuracies, particularly in the segmentation of small and curved vessels.
![Figure 3](http://cdn.amegroups.cn/journals/amepc/files/journals/4/articles/133484/public/133484-PB6-7290-R1.jpg/w300)
![Figure 4](http://cdn.amegroups.cn/journals/amepc/files/journals/4/articles/133484/public/133484-PB7-8036-R1.jpg/w300)
![Figure 5](http://cdn.amegroups.cn/journals/amepc/files/journals/4/articles/133484/public/133484-PB8-3982-R1.jpg/w300)
Quantitative segmentation analysis
The quantitative results of the different methods are shown in Table 1. On the MIDAS-I dataset, the Dice, specificity, sensitivity, HD95, and IoU values of DVNet were 0.900, 0.999, 0.920, 28.421, and 0.860, respectively; only the sensitivity value was marginally lower (by 0.001) than that of ERNet. On the MIDAS-II dataset, the segmentation performances of DVNet were 0.715, 0.999, 0.711, 29.124, and 0.713 for Dice, specificity, sensitivity, HD95, and IoU, respectively. DVNet exhibited superior performance relative to the other segmentation methods on both the MIDAS-I and MIDAS-II datasets, with statistical significance (P<0.05).
Table 1
| Dataset | Methods | Dice | Specificity | Sensitivity | HD95 | IoU |
|---|---|---|---|---|---|---|
| MIDAS-I | UNet | 0.855 | 0.996 | 0.894 | 33.221 | 0.772 |
| MIDAS-I | VNet | 0.867 | 0.994 | 0.899 | 33.112 | 0.810 |
| MIDAS-I | DeepMedic | 0.879 | 0.998 | 0.880 | 29.689 | 0.832 |
| MIDAS-I | BraveNet | 0.872 | 0.998 | 0.889 | 29.015 | 0.830 |
| MIDAS-I | ERNet | 0.889 | 0.998 | 0.921a | 28.834 | 0.841 |
| MIDAS-I | UNETR | 0.889 | 0.997 | 0.920 | 28.671 | 0.843 |
| MIDAS-I | DVNet | 0.900*a | 0.999a | 0.920 | 28.421a | 0.860a |
| MIDAS-II | UNet | 0.645 | 0.996 | 0.655 | 31.146 | 0.661 |
| MIDAS-II | VNet | 0.653 | 0.993 | 0.689 | 30.534 | 0.673 |
| MIDAS-II | DeepMedic | 0.669 | 0.998 | 0.699 | 29.779 | 0.697 |
| MIDAS-II | BraveNet | 0.661 | 0.998 | 0.699 | 29.423 | 0.685 |
| MIDAS-II | ERNet | 0.710 | 0.999a | 0.713a | 29.642 | 0.698 |
| MIDAS-II | UNETR | 0.700 | 0.998 | 0.710 | 29.533 | 0.698 |
| MIDAS-II | DVNet | 0.715*a | 0.999a | 0.711 | 29.124a | 0.713a |
*, P<0.05, Wilcoxon signed-rank test between DVNet and the other methods. a, the best result in each column. HD95, the 95th percentile Hausdorff distance; IoU, mean Intersection over Union; UNETR, Swin U-Net transformers; DVNet, deep vascular network; ERNet, edge-reinforced network; MIDAS, Database of MR Brain Images of Healthy Volunteers; MR, magnetic resonance.
Ablation experiment
Table 2 shows the segmentation performance of the different module combinations in the ablation experiments on the MIDAS-I dataset. The complete DVNet model with both the DCP and VAEM modules performed significantly better than models containing neither or only one of the modules. Adding the DCP decreased the HD95 (28.345 for DVNet with the DCP alone and 28.421 for DVNet with both the DCP and VAEM modules). Adding the VAEM markedly increased the Dice coefficient of cerebrovascular segmentation (0.881 for DVNet with the VAEM alone and 0.900 for DVNet with both the DCP and VAEM modules).
Table 2
| Methods | Dice | Specificity | Sensitivity | HD95 | IoU |
|---|---|---|---|---|---|
| DVNet | 0.855 | 0.996 | 0.894 | 33.221 | 0.825 |
| DVNet + DCP | 0.872 | 0.998 | 0.889 | 28.345a | 0.846 |
| DVNet + VAEM | 0.881 | 0.998 | 0.922a | 28.532 | 0.851 |
| DVNet + DCP + VAEM | 0.900a | 0.999a | 0.920 | 28.421 | 0.860a |
a, the best result in each column. HD95, the 95th percentile Hausdorff distance; IoU, mean Intersection over Union; VAEM, vascular attention enhancement module; DCP, dual contextual path; DVNet, deep vascular network; MIDAS, Database of MR Brain Images of Healthy Volunteers; MR, magnetic resonance.
External validation
The segmentation performance of the model on the external validation dataset was similar to that on the test dataset (MIDAS-II) (Figure 6). The specificity in the external validation cohort was slightly lower (0.998) than that in the test cohort, whereas the sensitivity was slightly greater (0.733). The Dice coefficient in the external validation was 0.755, and the IoU was 0.730.
![Figure 6](http://cdn.amegroups.cn/journals/amepc/files/journals/4/articles/133484/public/133484-PB9-5682-R1.jpg/w300)
Discussion
Efficient and precise cerebral vasculature segmentation on MRA images is crucial for both clinical practice and research. Our proposed method utilizes DVNet with DCP and VAEM to enhance cerebrovascular segmentation by integrating complex cerebrovascular geometry and topology. The DCP effectively preserves cerebrovascular topology information, improving the continuity of cerebrovascular structures. The VAEM efficiently learns cerebrovascular features, enhancing fine cerebrovascular segmentation performance. Furthermore, the external validation dataset results validate the effectiveness of our method, demonstrating its superiority over existing approaches, especially in the segmentation of cerebral distal small arteries.
Given the limited representation of cerebrovascular structures in MRA images and the intricate geometry and topology of cerebrovascular vessels, accurate cerebrovascular structure segmentation, particularly of distal small arteries, poses a significant challenge (29). The main branches of the cerebrovascular system give rise to numerous winding vascular branches whose distal extensions exhibit significant anatomical variability, making reliable extraction and segmentation difficult for existing methods (7). Recently, DL-based methods have performed well in 3D medical image analysis. Çiçek et al. (30) extended 2D UNet to 3D UNet by replacing all 2D operations with their 3D counterparts to segment 3D volumetric medical images; this approach has become the basic model for segmenting 3D volume data. Hilbert et al. (10) developed and validated BraveNet, a novel DL model for vascular segmentation, on a large, aggregated dataset of patients with cerebrovascular disease; its performance surpassed that of the other models for arterial brain vessel segmentation, with a Dice of 0.931 and an HD95 of 29.153. In our study, we proposed DVNet with the DCP and VAEM for cerebrovascular segmentation and achieved Dice and IoU values of 0.900 and 0.860 on the MIDAS-I dataset and of 0.715 and 0.713 on the MIDAS-II dataset.
UNet and VNet are established medical image segmentation techniques that have demonstrated strong performance across a range of medical imaging applications (31-34). DeepMedic and BraveNet are well-known approaches for multiscale cerebrovascular segmentation that effectively delineate intricate blood vessels (35-38). ERNet has achieved notably superior results in cerebrovascular segmentation by incorporating an edge feature enhancement module (11). We therefore compared DVNet with these networks. On the MIDAS-I dataset, DVNet demonstrated the best segmentation performance except for sensitivity, which was marginally lower (by 0.001) than that of ERNet, supporting the effectiveness of our method for cerebrovascular segmentation. On the MIDAS-II dataset, the segmentation performance of DVNet was also notable: the specificity reached 0.999, and most other metrics were the best among the compared methods, demonstrating the generalizability of our approach. The qualitative visualization results showed clear advantages for DVNet: for winding vessels, our method produced coherent segmentations, and for small distal vessels, it recovered the complete geometry. DVNet was also more coherent at cerebrovascular bifurcations and curved branches, where the other methods exhibited breakage. In particular, DVNet could segment cerebral distal small arteries, for which the other methods produced no clear segmentation results.
Additionally, we performed ablation experiments on the two modules of the double context path and vascular attention enhancement on the MIDAS-I dataset. Our findings suggested that both modules significantly improved the cerebrovascular segmentation performance. Utilizing the DCP, we successfully mitigated the HD95 of the segmentation performance, demonstrating enhanced accuracy in delineating the comprehensive cerebrovascular context. Integration of the VAEM led to a notable enhancement in the Dice score for cerebrovascular segmentation, confirming its efficacy in augmenting cerebrovascular structure characteristics and achieving precise segmentation. Consequently, our approach effectively captures the geometric and topological features of intricate cerebrovascular structures, resulting in significantly improved segmentation performance. In addition, the performance of DVNet on external MRA images was robust and adequate.
This study has several limitations. First, although DVNet outperformed the other methods, there remains room for improvement in accurately segmenting fine vessels relative to the ground truth. Future studies could add feature encoding at different levels and scales to better learn the features of distal fine vessels. Second, implementing our proposed DL algorithm for cerebrovascular segmentation requires annotated data that match the complexity of cerebrovascular structures; the algorithm places stringent demands on both the quantity and quality of the data, which makes annotation challenging. To address this issue, we will explore strategies such as data augmentation, semisupervised learning, transfer learning, and active learning to improve the algorithm's performance.
Conclusions
The DVNet with the DCP and VAEM can enhance cerebrovascular segmentation by incorporating intricate geometric and topological features, especially in small distal vessels. DVNet can be seamlessly integrated with a variety of segmentation backbones, providing better versatility and applicability. Therefore, the DVNet with DCP and VAEM has great potential for segmentation and quantitative analysis of cerebral small distal vessels in both clinical application and research.
Acknowledgments
The MR brain images from healthy volunteers used in this paper were collected and made available by the CASILab at The University of North Carolina at Chapel Hill and were distributed by the MIDAS Data Server at Kitware, Inc.
Footnote
Funding: This work was funded by
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://qims.amegroups.com/article/view/10.21037/qims-24-1514/coif). The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. This study was conducted in accordance with the Declaration of Helsinki (as revised in 2013) and was approved by the Ethics Committee of the Nanjing Medical University (No. 2019-664). The requirement for individual consent was waived by the Ethics Committee due to the retrospective nature of the analysis.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Zhang K, Chen Z, Chen L, Canton G, Geleri DB, Chu B, Guo Y, Hippe DS, Pimentel KD, Balu N, Hatsukami TS, Yuan C. Alterations in cerebral distal vascular features and effect on cognition in a high cardiovascular risk population: A prospective longitudinal study. Magn Reson Imaging 2023;98:36-43. [Crossref] [PubMed]
- Chen Z, Liu W, Balu N, Chen L, Ortega D, Huang X, Hatsukami TS, Yang J, Yuan C. Associations of Intracranial Artery Length and Branch Number on Time-of-Flight MRA With Cognitive Impairment in Hypertensive Older Males. J Magn Reson Imaging 2024;60:1720-8. [Crossref] [PubMed]
- Torres-Simon L, Del Cerro-León A, Yus M, Bruña R, Gil-Martinez L, Dolado AM, Maestú F, Arrazola-Garcia J, Cuesta P. Decoding the best automated segmentation tools for vascular white matter hyperintensities in the aging brain: a clinician's guide to precision and purpose. Geroscience 2024;46:5485-504. [Crossref] [PubMed]
- Pal SC, Toumpanakis D, Wikstrom J, Ahuja CK, Strand R, Dhara AK. Multi-Level Residual Dual Attention Network for Major Cerebral Arteries Segmentation in MRA Toward Diagnosis of Cerebrovascular Disorders. IEEE Trans Nanobioscience 2024;23:167-75. [Crossref] [PubMed]
- Yang C, Zhang H, Chi D, Li Y, Xiao Q, Bai Y, Li Z, Li H, Li H. Contour attention network for cerebrovascular segmentation from TOF-MRA volumetric images. Med Phys 2024;51:2020-31. [Crossref] [PubMed]
- Xia L, Xie Y, Wang Q, Zhang H, He C, Yang X, Lin J, Song R, Liu J, Zhao Y. A nested parallel multiscale convolution for cerebrovascular segmentation. Med Phys 2021;48:7971-83. [Crossref] [PubMed]
- Zhao Y, Zheng Y, Liu Y, Zhao Y, Luo L, Yang S, Na T, Wang Y, Liu J. Automatic 2-D/3-D Vessel Enhancement in Multiple Modality Images Using a Weighted Symmetry Filter. IEEE Trans Med Imaging 2018;37:438-50. [Crossref] [PubMed]
- Guo Z, Feng J, Lu W, Yin Y, Yang G, Zhou J. Cross-modality cerebrovascular segmentation based on pseudo-label generation via paired data. Comput Med Imaging Graph 2024;115:102393. [Crossref] [PubMed]
- Chen C, Zhou K, Wang Z, Zhang Q, Xiao R. All answers are in the images: A review of deep learning for cerebrovascular segmentation. Comput Med Imaging Graph 2023;107:102229. [Crossref] [PubMed]
- Hilbert A, Madai VI, Akay EM, Aydin OU, Behland J, Sobesky J, Galinovic I, Khalil AA, Taha AA, Wuerfel J, Dusek P, Niendorf T, Fiebach JB, Frey D, Livne M. BRAVE-NET: Fully Automated Arterial Brain Vessel Segmentation in Patients With Cerebrovascular Disease. Front Artif Intell 2020;3:552258. [Crossref] [PubMed]
- Xia L, Zhang H, Wu Y, Song R, Ma Y, Mou L, Liu J, Xie Y, Ma M, Zhao Y. 3D vessel-like structure segmentation in medical images by an edge-reinforced network. Med Image Anal 2022;82:102581. [Crossref] [PubMed]
- Verma R, Kumar N, Patil A, Kurian NC, Rane S, Graham S, et al. MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge. IEEE Trans Med Imaging 2021;40:3413-23. [Crossref] [PubMed]
- Zunair H, Hamza A. Masked Supervised Learning for Semantic Segmentation. ArXiv 2022. doi: 10.48550/arXiv.2210.00923.
- Zunair H, Ben Hamza A. Melanoma detection using adversarial training and deep transfer learning. Phys Med Biol 2020;65:135005. [Crossref] [PubMed]
- Zunair H, Hamza AB. Synthesis of COVID-19 chest X-rays using unpaired image-to-image translation. Soc Netw Anal Min 2021;11:23. [Crossref] [PubMed]
- Choi W, Cha YJ. SDDNet: Real-Time Crack Segmentation. IEEE Transactions on Industrial Electronics 2020;67:8016-25. [Crossref]
- Kang DH, Cha YJ. Efficient attention-based deep encoder and decoder for automatic crack segmentation. Struct Health Monit 2022;21:2190-205. [Crossref] [PubMed]
- Lewis J, Cha YJ, Kim J. Dual encoder-decoder-based deep polyp segmentation network for colonoscopy images. Sci Rep 2023;13:1183. [Crossref] [PubMed]
- Heckel F, Schwier M, Peitgen HO, editors. Object-oriented application development with MeVisLab and Python. GI Jahrestagung 2009;154:1338-51.
- Tuduki Y, Murase K, Izumida M, Miki H, Kikuchi K, Murakami K, Ikezoe J. Automated seeded region growing algorithm for extraction of cerebral blood vessels from magnetic resonance angiographic data. Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Cat No00CH37143) 2000;3:1756-9.
- Aylward SR, Bullitt E. Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction. IEEE Trans Med Imaging 2002;21:61-75. [Crossref] [PubMed]
- Avants BB, Tustison NJ, Stauffer M, Song G, Wu B, Gee JC. The Insight ToolKit image registration framework. Front Neuroinform 2014;8:44. [Crossref] [PubMed]
- Kamnitsas K, Ledig C, Newcombe VFJ, Simpson JP, Kane AD, Menon DK, Rueckert D, Glocker B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal 2017;36:61-78. [Crossref] [PubMed]
- Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Köpf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S. PyTorch: An Imperative Style, High-Performance Deep Learning Library. ArXiv 2019. doi: 10.48550/arXiv.1912.01703.
- Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. ArXiv 2014. doi: 10.48550/arXiv.1412.6980.
- Melerowitz L, Sreenivasa S, Nachbar M, Stsefanenka A, Beck M, Senger C, Predescu N, Ullah Akram S, Budach V, Zips D, Heiland M, Nahles S, Stromberger C. Design and evaluation of a deep learning-based automatic segmentation of maxillary and mandibular substructures using a 3D U-Net. Clin Transl Radiat Oncol 2024;47:100780. [Crossref] [PubMed]
- Xu X, Du L, Yin D. Dual-branch feature fusion S3D V-Net network for lung nodules segmentation. J Appl Clin Med Phys 2024;25:e14331. [Crossref] [PubMed]
- Pecco N, Della Rosa PA, Canini M, Nocera G, Scifo P, Cavoretto PI, Candiani M, Falini A, Castellano A, Baldoli C. Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation. Radiol Artif Intell 2024;6:e230229. [Crossref] [PubMed]
- Gao X, Uchiyama Y, Zhou X, Hara T, Asano T, Fujita H. A fast and fully automatic method for cerebrovascular segmentation on time-of-flight (TOF) MRA image. J Digit Imaging 2011;24:609-25. [Crossref] [PubMed]
- Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O, editors. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In: Ourselin S, Joskowicz L, Sabuncu M, Unal G, Wells W. editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. MICCAI 2016. Lecture Notes in Computer Science(), vol 9901. Springer, Cham.
- Morris DM, Wang C, Papanastasiou G, Gray CD, Xu W, Sjöström S, Badr S, Paccou J, Semple SI, MacGillivray T, Cawthorn WP. A novel deep learning method for large-scale analysis of bone marrow adiposity using UK Biobank Dixon MRI data. Comput Struct Biotechnol J 2024;24:89-104. [Crossref] [PubMed]
- Konell HG, Junior LOM, Dos Santos AC, Salmon CEG. Assessment of U-Net in the segmentation of short tracts: Transferring to clinical MRI routine. Magn Reson Imaging 2024;111:217-28. [Crossref] [PubMed]
- Zhu J, Bolsterlee B, Chow BVY, Song Y, Meijering E. Hybrid dual mean-teacher network with double-uncertainty guidance for semi-supervised segmentation of magnetic resonance images. Comput Med Imaging Graph 2024;115:102383. [Crossref] [PubMed]
- Song Z, Wu H, Chen W, Slowik A. Improving automatic segmentation of liver tumor images using a deep learning model. Heliyon 2024;10:e28538. [Crossref] [PubMed]
- Vossough A, Khalili N, Familiar AM, Gandhi D, Viswanathan K, Tu W, Haldar D, Bagheri S, Anderson H, Haldar S, Storm PB, Resnick A, Ware JB, Nabavizadeh A, Fathi Kazerooni A. Training and Comparison of nnU-Net and DeepMedic Methods for Autosegmentation of Pediatric Brain Tumors. AJNR Am J Neuroradiol 2024;45:1081-9. [Crossref] [PubMed]
- Raut P, Baldini G, Schöneck M, Caldeira L. Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors. Front Radiol 2023;3:1336902. [Crossref] [PubMed]
- Patel TR, Pinter N, Sarayi SMMJ, Siddiqui AH, Tutino VM, Rajabzadeh-Oghaz H. Automated Cerebral Vessel Segmentation of Magnetic Resonance Imaging in Patients with Intracranial Atherosclerotic Diseases. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:3920-3. [Crossref] [PubMed]
- Patel TR, Patel A, Veeturi SS, Shah M, Waqas M, Monteiro A, Baig AA, Pinter N, Levy EI, Siddiqui AH, Tutino VM. Evaluating a 3D deep learning pipeline for cerebral vessel and intracranial aneurysm segmentation from computed tomography angiography-digital subtraction angiography image pairs. Neurosurg Focus 2023;54:E13. [Crossref] [PubMed]