Original Article

Automated segmentation and quantitative measurement of cervical nerves in ultrasound images using an SZJ-SEG-based deep learning framework

Zheyuan Zhang1#, Wen Cao2#, Haomei Luan1, Jie Lin1, Xiaoyang Zhu3, Zongliang Yan3, Jichen Wang1, Huabin Zhang1, Ruijun Guo2, Zhiyong Bai1

1Department of Ultrasound, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; 2Department of Ultrasound Medicine, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China; 3Beijing Vision Perception Artificial Intelligence Laboratory, Beijing, China

Contributions: (I) Conception and design: Z Zhang, X Zhu, Z Yan; (II) Administrative support: Z Bai, R Guo; (III) Provision of study materials or patients: H Luan, J Lin, J Wang, H Zhang; (IV) Collection and assembly of data: W Cao, H Luan, J Lin, J Wang, H Zhang; (V) Data analysis and interpretation: X Zhu, Z Yan, W Cao, Z Zhang; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

#These authors contributed equally to this work.

Correspondence to: Ruijun Guo, MD. Department of Ultrasound Medicine, Beijing Chaoyang Hospital, Capital Medical University, No. 8 Gongti South Road, Chaoyang District, Beijing 100020, China. Email: ruijunguo@126.com; Zhiyong Bai, MD. Department of Ultrasound, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, No. 168 Litang Road, Changping District, Beijing 102218, China. Email: zhiyongbai@sina.com.

Background: Accurate recognition and quantitative assessment of cervical nerves are essential for ultrasound-guided nerve block, preoperative evaluation, and diagnosis of peripheral neuropathies. However, manual interpretation and measurement of nerve structures remain time-consuming and operator dependent. This study aimed to develop an intelligent, fully automated system for precise segmentation and quantitative measurement of cervical nerves in ultrasound images using a deep learning-based approach.

Methods: Ultrasound images from 200 healthy volunteers were collected and meticulously annotated to construct a large-scale standardized dataset comprising 117,729 images. A fully automated analysis framework was designed, incorporating a YOLOv11 network for precise localization of the effective imaging region [mean intersection over union (mIoU) =0.99] and an optical character recognition (OCR) module for automatic extraction of depth scale information. The proposed segmentation network, termed SZJ-SEG, combines a Deconv Block and an efficient upsampling convolution block (EUCB) based on a ResNet50 backbone to achieve high segmentation accuracy with low computational cost. Quantitative analysis of the C5–C7 nerve roots was performed through pixel-to-physical scale calibration for cross-sectional area (CSA) and perimeter calculation.

Results: On a clinical dataset comprising 117,729 images (training/validation/test =94,183/11,773/11,773), SZJ-SEG achieved mIoU values of 0.9124, 0.9109, and 0.9041 for the C5, C6, and C7 nerves, respectively, and 0.9227 for the continuous brachial plexus section. The mean absolute error (MAE) of CSA measurement ranged from 0.278 to 0.442 mm2, with mean absolute percentage error (MAPE) between 2.43% and 6.16%, and Pearson correlation coefficients exceeding 0.96. For perimeter measurements, MAE ranged from 0.374 to 0.471 mm, MAPE from 3.05% to 4.49%, and Pearson R from 0.84 to 0.91, indicating excellent consistency with manual annotations.

Conclusions: The proposed SZJ-SEG-based system enables accurate, efficient, and reproducible segmentation and quantitative analysis of cervical ultrasound images. With strong performance and high clinical relevance, it provides a reliable tool for ultrasound-guided nerve block localization and preoperative assessment. Its modular architecture offers scalability for extension to other anatomical regions, highlighting broad potential in intelligent ultrasound diagnosis.

Keywords: Cervical ultrasound; deep learning; medical image segmentation; nerve identification; You Only Look Once, version 11 (YOLOv11)


Submitted Nov 14, 2025. Accepted for publication Jan 28, 2026. Published online Feb 11, 2026.

doi: 10.21037/qims-2025-aw-2434


Introduction

Ultrasound imaging is widely used for the evaluation of soft tissues because of its advantages of real-time visualization, noninvasiveness, and accessibility (1,2). In particular, ultrasound-guided nerve identification and block procedures have become essential in anesthesia, pain management, and preoperative assessment (3-6). However, accurate recognition and quantitative analysis of cervical nerves remain challenging. The sonographic appearance of nerves varies considerably across individuals, and their boundaries are often obscured by surrounding muscles and vessels. Manual measurement of nerve parameters such as cross-sectional area (CSA) and perimeter is still the clinical standard, but it is time-consuming, operator dependent, and prone to variability (7,8).

The growing need for precision and standardization in ultrasound-guided procedures has driven the development of intelligent image analysis tools (9,10). Deep learning has recently achieved remarkable success in object detection and medical image segmentation, enabling automated interpretation of complex anatomical structures. Several studies have applied convolutional neural networks to musculoskeletal or peripheral nerve ultrasound with encouraging results (11-16). Nevertheless, cervical ultrasound imaging poses unique challenges—low tissue contrast, speckle noise, and overlapping structures often lead to inaccurate segmentation and unstable performance (17,18). Moreover, few existing approaches incorporate spatial calibration between pixel and physical scales, which is essential for quantitative measurement.

To address these limitations, we developed an end-to-end deep learning framework for automated segmentation and quantitative measurement of cervical nerves in ultrasound images. The proposed system integrates three functional modules into a single automated workflow. A You Only Look Once, version 11 (YOLOv11)-based detection module localizes the effective imaging region, while an optical character recognition (OCR) module serves as an auxiliary component that automatically extracts depth-scale information for pixel-to-physical calibration. The core methodological contribution of this study lies in the proposed SZJ-based segmentation (SZJ-SEG) network and the automated quantitative analysis framework, which together enable accurate and reproducible measurement of cervical nerve morphology.

The purpose of this study was to develop and validate the proposed SZJ-SEG system using a large, expertly annotated cervical ultrasound dataset. We aimed to evaluate its performance in nerve segmentation and quantitative measurement compared with expert manual annotations, and to explore its potential clinical value in improving the accuracy and standardization of ultrasound-guided cervical nerve assessment. We present this article in accordance with the TRIPOD+AI reporting checklist (available at https://qims.amegroups.com/article/view/10.21037/qims-2025-aw-2434/rc).


Methods

Data acquisition

The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was approved by the Ethics Committee of Beijing Tsinghua Changgung Hospital (No. 24455-0-01). Written informed consent was obtained from all participants prior to enrollment. This was a prospective data collection study. Cervical ultrasound images were acquired from 200 healthy adult volunteers (102 men and 98 women) at Beijing Tsinghua Changgung Hospital between September 2024 and March 2025. The inclusion criteria included: (I) age between 18 and 70 years; (II) no history of cervical surgery or trauma; (III) no clinical symptoms or diagnosed history of peripheral neuropathy, radiculopathy, or brachial plexus pathology; and (IV) body mass index (BMI) between 18.5 and 28.0 kg/m2 to ensure adequate ultrasound image quality. The exclusion criteria were as follows: (I) ultrasound images with poor image quality that did not allow reliable visualization of cervical nerves, including severe artifacts, excessive acoustic shadowing, or motion-related distortion; and (II) presence of obvious anatomical variations or nerve developmental anomalies in the cervical region that significantly interfered with standard identification and delineation of cervical nerve structures.

All scans were performed by sonographers with more than 5 years of clinical experience using high-resolution ultrasound systems (GE LOGIQ E10 and Philips EPIQ 7) equipped with linear array probes.

The imaging protocol followed standardized procedures covering the fifth (C5), sixth (C6), and seventh (C7) cervical nerve roots and continuous sections of the brachial plexus. The collected images included transverse and oblique sections showing nerves, vessels, muscles, and bony landmarks.

In total, 117,729 cervical ultrasound images were collected from the 200 healthy volunteers and subsequently used for dataset construction, annotation, training, validation, and testing.

Annotation protocol

To ensure high-quality reference data, all images were annotated using a multi-stage review process (“initial labeling-verification-final approval”). Each image was independently labeled by trained annotators and verified by expert sonographers. To avoid assessment bias, the experts performing the manual annotations and measurements were blinded to the results generated by the automated SZJ-SEG system throughout the entire process. Semantic segmentation masks were generated for key anatomical structures, including nerves, muscles, blood vessels, bones, and ligaments.

Annotations were performed at the pixel level using an in-house labeling platform. The labeling criteria were based on cervical anatomical characteristics, ensuring clear delineation of structure boundaries and consistency across images. This dataset provides a robust foundation for training and evaluating the proposed segmentation models.

Data augmentation and preprocessing

To enhance model robustness under various imaging conditions, multiple data augmentation strategies were applied during training. These included random horizontal flipping, affine transformations (translation ±12%, scaling ±12%, rotation ±15°), and brightness/contrast adjustments within the range of –0.3 to +0.5.

Following augmentation, images were standardized through a unified preprocessing pipeline. All images were resized to 512×512 pixels using bicubic interpolation and normalized based on the mean and standard deviation of the training set. The data were then converted to PyTorch tensors in the channel-height-width (CHW) format for network input compatibility.
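The preprocessing steps above can be sketched as follows. This is a minimal illustration only; the normalization statistics and the function name are placeholders, not the study's actual configuration.

```python
import numpy as np
from PIL import Image

# Placeholder normalization statistics; the study computed these
# from the mean and standard deviation of its own training set.
TRAIN_MEAN = np.array([0.30, 0.30, 0.30], dtype=np.float32)
TRAIN_STD = np.array([0.18, 0.18, 0.18], dtype=np.float32)

def preprocess(image: Image.Image) -> np.ndarray:
    """Resize to 512x512 with bicubic interpolation, normalize,
    and convert to CHW float32 for network input."""
    resized = image.convert("RGB").resize((512, 512), Image.BICUBIC)
    arr = np.asarray(resized, dtype=np.float32) / 255.0   # HWC in [0, 1]
    arr = (arr - TRAIN_MEAN) / TRAIN_STD                  # per-channel normalization
    return arr.transpose(2, 0, 1)                         # HWC -> CHW
```

In practice this output would be wrapped in a PyTorch tensor (e.g., `torch.from_numpy`) before being fed to the network.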

Dataset construction

The final dataset contained 117,729 ultrasound images from the 200 subjects, divided by subject into training, validation, and test sets to prevent data leakage. Specifically, the dataset included 94,183 training, 11,773 validation, and 11,773 test images.
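Splitting by subject rather than by image ensures that frames from one volunteer never appear in both the training and test sets. A minimal sketch of such a split (the function name, ratios, and seed are illustrative; the study's exact procedure is not specified beyond the resulting 80/10/10 division):

```python
import random

def split_by_subject(subject_ids, train_frac=0.8, val_frac=0.1, seed=42):
    """Partition subject IDs into train/val/test groups so that all
    images from one subject land in exactly one split (no leakage)."""
    ids = sorted(set(subject_ids))
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Images are then assigned to a split according to their subject's group membership.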

Four major anatomical regions were covered: C5, C6, C7 nerve roots, and continuous brachial plexus sections. The dataset maintained balanced sample distributions across these categories. A summary of dataset composition is presented in Tables 1,2.

Table 1

Composition of the standard plane recognition dataset

Anatomical section Training Validation Test
C5 nerve root 23,243 2,905 2,905
C6 nerve root 23,437 2,929 2,929
C7 nerve root 24,219 3,027 3,027
Continuous brachial plexus 23,283 2,910 2,911

Table 2

Ultrasound standard cut planes and corresponding anatomical structures of the cervical nerves

Plane name Tissue segmentation-distinguished name
C5 nerve root; C6 nerve root; C7 nerve root C5, CST, TP, ATTP, PTTP, CVB, AS, MS/PS, SCM, LCM, CA, VA, TG, Tr, Eso
Continuous brachial plexus BP-ISG, BP-SCL, AS, MS/PS, SCM, SCA, 1stR

1stR, first rib; AS, anterior scalene; ATTP, anterior tubercle of transverse process; BP, brachial plexus; BP-ISG, BP-interscalene groove level; BP-SCL, BP-supraclavicular level; C5, C5 nerve root; CA, carotid artery; CS, cervical sympathetic; CST, cervical sympathetic trunk; CVB, cervical vertebral body; Eso, esophagus; LCM, longus colli muscle; MS, middle scalene; PS, posterior scalene; PTTP, posterior tubercle of transverse process; SCA, subclavian artery; SCM, sternocleidomastoid muscle; TG, thyroid gland; TP, transverse process; Tr, trachea; VA, vertebral artery.

Model architecture

A lightweight and high-performance medical image segmentation network, termed SZJ-SEG, was developed. The model adopts a classic encoder-decoder architecture that integrates two novel modules to enhance accuracy and efficiency: the Deconv Block and the efficient upsampling convolution block (EUCB).

  • Encoder: the encoder consists of four hierarchical stages (Stage 1–Stage 4) based on a ResNet50 backbone. These modules progressively extract and downsample multi-scale semantic features from the input ultrasound images while preserving spatial details critical for fine structure recognition.
  • Decoder: the decoder reconstructs the spatial resolution of the encoded features through upsampling and feature fusion. The Deconv Block restores high-resolution spatial information while modeling nonlinear semantic dependencies using a combination of instance normalization, a deconvolutional mixer, and multilayer perceptrons (MLPs). The EUCB module consists of 2× upsampling, depthwise separable convolutions, batch normalization, and 1×1 pointwise convolution. This design enables efficient feature refinement with reduced computational load.

Low-level feature maps are concatenated with high-level semantic maps at each decoding stage to enhance boundary and texture details. A deep supervision strategy is used by generating intermediate outputs at multiple resolutions to stabilize training and accelerate convergence (Figure 1).

Figure 1 Network architecture diagram of the segmentation model. BN, batch normalization; Conv, convolution; DWC, depthwise convolution; EUCB, efficient upsampling convolution block; MLP, multilayer perceptron; ReLU, rectified linear unit; SZJ-SEG, SZJ-based segmentation network.
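Based on the components named above, the EUCB can be sketched in PyTorch as follows. This is a reconstruction from the stated building blocks (2× upsampling, depthwise separable convolution, batch normalization, 1×1 pointwise convolution); kernel sizes, activation placement, and other details are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EUCB(nn.Module):
    """Efficient upsampling convolution block: 2x upsampling, depthwise
    separable convolution, batch norm, then a 1x1 pointwise projection."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # groups=in_ch makes this a depthwise convolution
        self.dw = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                            groups=in_ch, bias=False)
        self.bn = nn.BatchNorm2d(in_ch)
        self.act = nn.ReLU(inplace=True)
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # pointwise projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.up(x)
        x = self.act(self.bn(self.dw(x)))
        return self.pw(x)
```

The depthwise-separable factorization is what keeps the computational cost low relative to a standard convolution with the same receptive field.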

Effective region detection and spatial calibration

Before segmentation, an auxiliary YOLOv11 object detection network was trained to automatically localize the effective imaging region and exclude irrelevant background. The design objective of this module was to achieve highly accurate and consistent region localization across different ultrasound images, thereby ensuring spatial uniformity for subsequent segmentation and quantitative analysis.

To enable quantitative measurement, an OCR engine (Tesseract) was incorporated to extract depth scale markers displayed on the ultrasound images. The extracted physical depth D (in millimeters) and the corresponding image height H (in pixels) were used to calculate a scale conversion factor s (mm/pixel). This calibration ensures that subsequent measurements reflect true anatomical dimensions.
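The calibration arithmetic can be sketched as below. The OCR step itself requires a Tesseract installation (e.g., `pytesseract.image_to_string()` applied to the cropped scale region); here only the downstream parsing and conversion are shown, and the regex pattern and function names are illustrative, since scale annotations vary by vendor.

```python
import re

def parse_depth_mm(ocr_text: str) -> float:
    """Extract the physical depth (mm) from OCR'd scale text,
    e.g. '4.0 cm' or '40 mm'. Pattern is illustrative only."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*(cm|mm)", ocr_text)
    if match is None:
        raise ValueError("no depth marker found in OCR text")
    value, unit = float(match.group(1)), match.group(2)
    return value * 10.0 if unit == "cm" else value

def scale_factor(depth_mm: float, image_height_px: int) -> float:
    """Pixel-to-physical conversion factor s = D / H (mm per pixel)."""
    return depth_mm / image_height_px
```

For example, a 40 mm depth displayed over an 800-pixel-high image yields s = 0.05 mm/pixel.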

Automatic nerve area and perimeter calculation

Based on the segmentation results generated by SZJ-SEG, the CSA and perimeter of the C5–C7 nerve roots were automatically calculated. The process comprised three major steps:

  • Effective region extraction: the YOLOv11 model localized the relevant imaging area, and the image was cropped to remove black borders, text, and interface labels;
  • Scale calibration: the OCR module extracted the depth scale to determine the pixel-to-physical ratio s = D/H (mm/pixel);
  • Quantitative measurement: the total number of pixels N within each segmented nerve mask was multiplied by s2 to obtain the CSA in mm2.

    A = N × s2

The perimeter was computed using contour tracing and converted into millimeters by multiplying the contour length by s (Figure 2).

Figure 2 Process for calculating the area of the segmentation model. OCR, optical character recognition; SZJ-SEG, SZJ-based segmentation network; YOLOv11, You Only Look Once, version 11.
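The area and perimeter computations can be sketched in numpy as follows. The area formula matches the paper directly; for the perimeter, this sketch approximates contour tracing by counting exposed foreground pixel edges, a coarse stand-in since the study's actual contour-tracing implementation is not specified.

```python
import numpy as np

def nerve_area_mm2(mask: np.ndarray, s: float) -> float:
    """CSA = (number of foreground pixels N) * s^2, with s in mm/pixel."""
    return float(mask.sum()) * s * s

def nerve_perimeter_mm(mask: np.ndarray, s: float) -> float:
    """Approximate the perimeter by counting foreground pixel edges that
    face background (a simple discrete stand-in for contour tracing)."""
    padded = np.pad(mask.astype(bool), 1)
    edges = 0
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        neighbor = np.roll(padded, shift, axis=axis)
        edges += int(np.sum(padded & ~neighbor))  # boundary-facing edges
    return edges * s
```

For a 10×10-pixel square mask at s = 0.05 mm/pixel this gives a CSA of 0.25 mm2 and a perimeter of 2.0 mm, as expected for a 0.5 mm × 0.5 mm square.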

Evaluation metrics

Segmentation performance was evaluated using the mean intersection over union (mIoU) across test images. For quantitative measurement accuracy, the mean absolute error (MAE), mean absolute percentage error (MAPE), and Pearson correlation coefficient (R) between automated predictions and expert manual measurements were calculated.
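The metrics above reduce to a few lines each for a single mask pair or measurement series. A minimal numpy sketch (not the study's evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union for a pair of binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

def mae(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean absolute error between paired measurements."""
    return float(np.mean(np.abs(pred - true)))

def mape(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean absolute percentage error (%), relative to the true values."""
    return float(np.mean(np.abs((pred - true) / true)) * 100.0)

def pearson_r(pred: np.ndarray, true: np.ndarray) -> float:
    """Pearson correlation coefficient between paired measurements."""
    return float(np.corrcoef(pred, true)[0, 1])
```

The mIoU reported in the paper is the mean of per-image IoU values over the test set.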

Statistical analysis

All quantitative data are presented as mean ± standard deviation (SD). Statistical analyses were performed using Python (version 3.10) and SPSS (version 26.0, IBM Corp.). Agreement between automated and manual measurements was assessed using Pearson correlation and Bland-Altman analysis, with P<0.05 considered statistically significant.
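Bland-Altman agreement statistics (bias and 95% limits of agreement) likewise reduce to a short numpy computation. This is a sketch of the standard formulation, not the study's analysis scripts:

```python
import numpy as np

def bland_altman(auto: np.ndarray, manual: np.ndarray):
    """Return the mean difference (bias) and the 95% limits of agreement
    between paired automated and manual measurements."""
    diff = auto - manual
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A bias near zero with narrow limits of agreement indicates that the automated measurements can substitute for manual ones without systematic offset.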


Results

Performance of effective region detection

The YOLOv11-based detection module achieved highly accurate localization of the effective imaging region across all ultrasound images. On the test set, the model reached a precision of 97.2%, recall of 95.8%, and mean average precision at IoU 0.5 (mAP@0.5) of 96.4%. The mIoU for region detection was 0.99, confirming the model’s ability to consistently identify valid scanning areas and remove irrelevant background prior to segmentation. This ensured spatial uniformity and improved downstream segmentation accuracy.

Segmentation performance

The proposed SZJ-SEG network demonstrated excellent segmentation accuracy for cervical nerve structures across all tested anatomical levels. Table 3 summarizes the mIoU values obtained for each anatomical section in the training, validation, and test datasets. Overall, the SZJ-SEG model achieved an average mIoU above 0.91 across all structures, indicating highly consistent segmentation performance. Among the sections, the continuous brachial plexus region achieved the highest accuracy (mIoU =0.9227), suggesting that the model performs robustly even in anatomically complex regions.

Table 3

Segmentation performance (mIoU) across datasets

Anatomical section Training (mIoU) Validation (mIoU) Test (mIoU)
C5 nerve root 0.9124 0.9067 0.9124
C6 nerve root 0.9267 0.9112 0.9109
C7 nerve root 0.9041 0.8941 0.9041
Continuous brachial plexus 0.9392 0.9234 0.9227

mIoU, mean intersection over union.

Figure 3 illustrates representative segmentation results from different cervical levels. Nerves are displayed in yellow, arteries in red, and surrounding muscles in other colors. The boundaries of the segmented nerves closely match the ground-truth annotations, confirming precise localization and clear differentiation from adjacent tissues.

Figure 3 Segmentation results of the proposed model. (A) Segmentation results of the C5 nerve root. (B) Segmentation results of the C6 nerve root. (C) Segmentation results of the C7 nerve root. (D,E) Segmentation results of brachial plexus sections. 1stR, first rib; AS, anterior scalene; ATTP, anterior tubercle of transverse process; BP, brachial plexus; BP-ISG, BP-interscalene groove level; BP-SCL, BP-supraclavicular level; C5, C5 nerve root; CA, carotid artery; CST, cervical sympathetic trunk; CVB, cervical vertebral body; LCM, longus colli muscle; MS, middle scalene; PS, posterior scalene; PTTP, posterior tubercle of transverse process; SCA, subclavian artery; SCM, sternocleidomastoid muscle; TG, thyroid gland; VA, vertebral artery.

Quantitative measurement accuracy

The system’s capability for automatic quantitative analysis was evaluated by comparing model-generated measurements of the CSA and perimeter of the C5–C7 nerves against expert manual annotations.

As shown in Table 4, MAE for CSA ranged from 0.278 to 0.442 mm2, and MAPE was between 2.43% and 6.16%. The Pearson correlation coefficient (R) exceeded 0.96 for all nerve levels, demonstrating strong agreement with expert measurements. The system also demonstrated robust performance in predicting nerve perimeters (Table 5). The MAE ranged from 0.374 to 0.471 mm, and MAPE ranged from 3.05% to 4.49%, with Pearson correlation coefficients between 0.84 and 0.91. Although the perimeter measurements showed slightly larger errors than CSA predictions, all values remained within clinically acceptable limits. The high correlation values (R>0.84) confirmed excellent consistency between automated and manual assessments.

Table 4

Comparison of model-predicted and expert-measured CSAs

Region Mean true CSA (mm2) Mean predicted CSA (mm2) MAE (mm2) MAPE (%) Pearson R
C5-L 6.526 6.742 0.383 6.16 0.963
C5-R 8.386 8.486 0.399 4.97 0.965
C6-L 9.605 9.790 0.442 4.74 0.958
C6-R 10.472 10.676 0.376 3.76 0.971
C7-L 11.325 11.290 0.356 3.08 0.966
C7-R 11.156 11.479 0.278 2.43 0.969

CSA, cross-sectional area; L, left; MAE, mean absolute error; MAPE, mean absolute percentage error; R, right.

Table 5

Comparison of model-predicted and expert-measured nerve perimeters

Region Mean true perimeter (mm) Mean predicted perimeter (mm) MAE (mm) MAPE (%) Pearson R
C5-L 9.517 9.566 0.386 4.10 0.907
C5-R 10.651 10.761 0.471 4.49 0.843
C6-L 11.454 11.567 0.447 3.93 0.875
C6-R 11.900 11.945 0.383 3.21 0.864
C7-L 12.252 12.276 0.374 3.05 0.871
C7-R 12.502 12.328 0.396 3.15 0.841

L, left; MAE, mean absolute error; MAPE, mean absolute percentage error; R, right.

Figure 4 presents scatter plots comparing the automated and expert-measured CSA and perimeter values of the cervical nerves at different levels. The strong linear relationships (R>0.95 for CSA and R>0.84 for perimeter) confirm the reliability and consistency of the proposed method in quantitative nerve assessment across all cervical levels.

Figure 4 Correlation between automated and expert measurements of cervical nerve cross-sectional area and perimeter. Scatter plots show the relationship for the left and right sides of the C5 (A,B), C6 (C,D), and C7 (E,F) nerve roots. Each point represents one measurement pair. Gt peri, ground truth perimeter; L, left; Pred peri, predicted perimeter; R, right.

Discussion

This study developed and validated an automated deep learning-based system for the segmentation and quantitative analysis of cervical nerves in ultrasound images. The proposed framework integrates an effective region detection module, an OCR-based scale calibration component, and a novel segmentation network (SZJ-SEG) to achieve end-to-end automated measurement of nerve morphology. The results demonstrated that the system can accurately identify and measure cervical nerve structures, with segmentation accuracy exceeding an mIoU of 0.91 and quantitative errors within clinically acceptable limits. The findings indicate that the SZJ-SEG-based framework provides a reliable and efficient approach for objective, standardized nerve assessment in cervical ultrasound.

Deep learning has been increasingly applied to ultrasound image analysis in recent years. Prior research has primarily focused on organ-level segmentation tasks, such as the thyroid, liver, and musculoskeletal system (19-24). Some studies have attempted peripheral nerve segmentation using convolutional neural networks, reporting mIoU values typically ranging from 0.77 to 0.86 (17,25). In comparison, the proposed SZJ-SEG network achieved higher segmentation accuracy (mIoU >0.91) across all cervical levels, suggesting that the combination of the Deconv Block and EUCB effectively improves boundary delineation in low-contrast ultrasound environments. Unlike previous models that rely solely on pixel-wise segmentation, our framework incorporates pixel-metric calibration through OCR-based depth extraction. This addition enables direct conversion from pixel measurements to physical units (26,27), allowing quantitative analysis of CSA and perimeter. Such integration of detection, segmentation, and measurement within a single workflow represents an advancement toward fully automated ultrasound quantification.

Accurate identification and measurement of cervical nerves are crucial in several clinical applications. In ultrasound-guided regional anesthesia, precise visualization of the nerve roots and brachial plexus determines the efficacy and safety of local anesthetic delivery (28-30). Similarly, quantitative evaluation of nerve CSA and perimeter provides valuable diagnostic information for neuropathies, compression syndromes, and post-surgical monitoring (31-33). The system presented in this study offers several advantages in these contexts. It enables real-time and reproducible nerve localization, minimizing operator dependency and reducing examination time. Automated quantitative measurement can serve as an objective biomarker for nerve pathology, supporting longitudinal follow-up and treatment evaluation. Furthermore, the modular structure allows adaptation to different ultrasound systems and imaging conditions, providing flexibility for both clinical and research use.

The proposed SZJ-SEG framework offers several technical advantages. First, the YOLOv11 detection module ensures consistent identification of the effective imaging region, excluding background noise and text overlays that often compromise segmentation accuracy. Second, the OCR-based calibration module establishes an explicit mapping between image pixels and real-world dimensions, allowing reliable physical measurement without manual input. Third, the SZJ-SEG network incorporates efficient decoding and feature restoration mechanisms that enhance segmentation precision while maintaining computational efficiency. The combination of the Deconv Block and EUCB enables accurate delineation of small, low-contrast structures such as cervical nerves, which are difficult to distinguish using traditional networks. Finally, the system’s modular design allows for seamless integration into existing clinical workflows, making it suitable for both retrospective analysis and real-time applications.

Despite the encouraging results, several limitations of this study should be acknowledged. First, the dataset consisted exclusively of healthy volunteers. This design allowed us to establish a robust and standardized baseline for cervical nerve segmentation and quantitative measurement under normal anatomical conditions; however, it limits direct evaluation of the system’s diagnostic performance in patients with cervical nerve pathologies. Second, ultrasound images were acquired using only two high-end ultrasound systems (GE LOGIQ E10 and Philips EPIQ 7), which were deliberately selected to ensure stable image quality and standardized acquisition protocols. Nevertheless, restricting data acquisition to two devices may reduce generalizability to images obtained from other ultrasound platforms with different imaging characteristics. Third, cases with poor image quality, including severe artifacts or acoustic shadowing, as well as subjects with obvious anatomical variations or nerve developmental anomalies, were excluded during data acquisition to ensure reliable annotation and technical validation. As a result, the current evaluation was performed under standardized imaging conditions, and future studies should investigate system performance in more challenging scenarios, including pathological conditions, anatomical variants, and suboptimal image quality.

Future development will aim to expand the framework to additional anatomical regions and clinical scenarios. Extending the model to the thoracic outlet, shoulder, and upper limb may support comprehensive mapping of the brachial plexus and its peripheral branches. Furthermore, combining deep learning segmentation with radiomics or texture-based analysis could enable more refined characterization of nerve pathology and tissue composition (34,35). These advancements may ultimately support precision medicine approaches in neuromuscular ultrasound.


Conclusions

In summary, this study presents an intelligent, fully automated system for the segmentation and quantitative measurement of cervical nerves in ultrasound images. The proposed SZJ-SEG network achieved high segmentation accuracy and demonstrated excellent agreement with expert manual measurements of CSA and perimeter. By integrating effective region detection, spatial calibration, and deep learning-based segmentation, the system provides accurate, reproducible, and efficient quantitative analysis with minimal operator intervention. This framework shows strong potential for enhancing the standardization of cervical nerve assessment, improving the precision of ultrasound-guided nerve block procedures, and supporting objective evaluation of peripheral nerve disorders. Its modular architecture also offers a promising foundation for extension to other anatomical regions and applications in intelligent ultrasound diagnosis.


Acknowledgments

None.


Footnote

Reporting Checklist: The authors have completed the TRIPOD+AI reporting checklist. Available at https://qims.amegroups.com/article/view/10.21037/qims-2025-aw-2434/rc

Data Sharing Statement: Available at https://qims.amegroups.com/article/view/10.21037/qims-2025-aw-2434/dss

Funding: None.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://qims.amegroups.com/article/view/10.21037/qims-2025-aw-2434/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was approved by the Ethics Committee of Beijing Tsinghua Changgung Hospital (No. 24455-0-01). Written informed consent was obtained from all participants prior to enrollment.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Yue H, Li P, Yuan L, Feng G, He L. Combination effect of ultrasound-guided superior cervical ganglion block and standard triptan for migraine attacks. J Oral Facial Pain Headache 2025;39:193-201. [Crossref] [PubMed]
  2. Zhang Z, Li X, Qi G, Zhang H, Feng X, Bai Z. Clinical Efficacy of Ultrasound Guidance in Brachial Plexus Nerve Conduction Study: A Comparative Analysis. Curr Med Imaging 2025;21:e15734056377599. [Crossref] [PubMed]
  3. Kim J, Cha J, Choi SN, Heo G, Yoo Y, Moon JY. Multicenter Prospective Randomized Comparison of Ultrasound-Guided Stellate Ganglion Versus Thoracic Paravertebral Block for Sympathetic Blockade in Chronic Upper Extremity Pain. Anesth Analg 2025;140:665-74. [Crossref] [PubMed]
  4. Wang L, Xu S, Jiang Z, He R. Ultrasound-guided thoracic paravertebral injection of dexamethasone palmitate combined with ropivacaine for the treatment of thoracic herpes zoster-related pain: protocol for a prospective, randomized controlled, single-center study. Front Pharmacol 2024;15:1470772. [Crossref] [PubMed]
  5. Pirri C, Torre DE, Behr AU, Macchi V, Porzionato A, De Caro R, Stecco C. Ultrasound-Guided Analgesia in Cardiac and Breast Surgeries: A Cadaveric Comparison of SPIP Block with Single and Double Injections vs. DPIP Block. Life (Basel) 2024.
  6. Finneran JJ 4th, Kobayashi L, Costantini TW, Weaver JL, Berndtson AE, Haines L, Doucet JJ, Adams L, Santorelli JE, Lee J, Trescot AM, Donohue MC, Schaar A, Ilfeld BM. Ultrasound-guided Percutaneous Cryoneurolysis for the Treatment of Pain after Traumatic Rib Fracture: A Randomized, Active-controlled, Participant- and Observer-masked Study. Anesthesiology 2025;142:532-45. [Crossref] [PubMed]
  7. Wu WT, Chen LR, Chang HC, Chang KV, Özçakar L. Quantitative Ultrasonographic Analysis of Changes of the Suprascapular Nerve in the Aging Population With Shoulder Pain. Front Bioeng Biotechnol 2021;9:640747. [Crossref] [PubMed]
  8. Bedewi MA, Kotb MA. Ultrasound reference values of C5, C6, and C7 brachial plexus roots at the interscalene groove. Neurol Sci 2021;42:2425-9. [Crossref] [PubMed]
  9. Madadi F, Kohzadi Z, Rahmatizadeh S, Sabouri AS, Dabbagh A. Artificial Intelligence-Driven Image and Data Analytics in Anesthesia. Anesthesiol Clin 2025;43:e1-e15. [Crossref] [PubMed]
  10. Harris J, Kamming D, Bowness JS. Artificial intelligence in regional anesthesia. Curr Opin Anaesthesiol 2025;38:605-10. [Crossref] [PubMed]
  11. Shi X, Yu T, Yuan Y, Wang D, Cui J, Bai L, Zheng F, Dai X, Du R, Chen Z, Zhou Z. Multimodal Deep Learning for Grading Carpal Tunnel Syndrome: A Multicenter Study in China. Acad Radiol 2025;32:4705-23. [Crossref] [PubMed]
  12. Moser F, Muller S, Lie T, Langø T, Hoff M. Automated segmentation of the median nerve in patients with carpal tunnel syndrome. Sci Rep 2024;14:16757. [Crossref] [PubMed]
  13. Peng J, Zeng J, Lai M, Huang R, Ni D, Li Z. One-Stop Automated Diagnostic System for Carpal Tunnel Syndrome in Ultrasound Images Using Deep Learning. Ultrasound Med Biol 2024;50:304-14. [Crossref] [PubMed]
  14. Gujarati KR, Bathala L, Venkatesh V, Mathew RS, Yalavarthy PK. Transformer-Based Automated Segmentation of the Median Nerve in Ultrasound Videos of Wrist-to-Elbow Region. IEEE Trans Ultrason Ferroelectr Freq Control 2024;71:56-69. [Crossref] [PubMed]
  15. Kawanishi K, Kakimoto A, Anegawa K, Tsutsumi M, Yamaguchi I, Kudo S. Automatic Identification of Ultrasound Images of the Tibial Nerve in Different Ankle Positions Using Deep Learning. Sensors (Basel) 2023;23:4855. [Crossref] [PubMed]
  16. Pelletier ED, Jeffries SD, Suissa N, Sarty I, Malka N, Song K, Sinha A, Hemmerling TM. From Consensus to Standardization: Evaluating Deep Learning for Nerve Block Segmentation in Ultrasound Imaging. A A Pract 2025;19:e02040. [Crossref] [PubMed]
  17. Cui H, Duan J, Lin L, Wu Q, Guo W, Zang Q, Zhou M, Fang W, Hu Y, Zou Z. DEMAC-Net: A Dual-Encoder Multiattention Collaborative Network for Cervical Nerve Pathway and Adjacent Anatomical Structure Segmentation. Ultrasound Med Biol 2025;51:1227-39. [Crossref] [PubMed]
  18. Wang Y, Zhu B, Kong L, Wang J, Gao B, Wang J, Tian D, Yao Y. BPSegSys: A Brachial Plexus Nerve Trunk Segmentation System Using Deep Learning. Ultrasound Med Biol 2024;50:374-83. [Crossref] [PubMed]
  19. Ateeq Almutairi S. Advancing thyroid diagnosis: integrating AI-driven CAD framework with numerical data and ultrasound images. PeerJ Comput Sci 2025;11:e3063. [Crossref] [PubMed]
  20. Zhan J, Zhang J, Zhu S, Ni L, Zhang C, Hu J. Diagnostic performance of ultrasound characteristics-based artificial intelligence models for thyroid nodules: a systematic review and meta-analysis. Front Oncol 2025;15:1614603. [Crossref] [PubMed]
  21. Wen W, Zhang T, Zhao H, Liu J, Jiang H, He Y, Jiang Z. Multimodal model enhances qualitative diagnosis of hypervascular thyroid nodules: integrating radiomics and deep learning features based on B-mode and PDI images. Gland Surg 2025;14:1558-71. [Crossref] [PubMed]
  22. Dana J, Meyer A, Paisant A, Rode A, Sartoris R, Séror O, Cassinotto C, Milot L, Grégory J, Cœur J, Lebigot J, Schembri V, Villeret F, Takeda AN, Ronot M, Vilgrain V, Baumert TF, Gallix B, Padoy N, Nahon P. Improving risk stratification and detection of early HCC using ultrasound-based deep learning models. JHEP Rep 2025;7:101510. [Crossref] [PubMed]
  23. Chang SF, Wu PY, Tsai MC, Tseng VS, Wang CC. AI-assisted anatomical structure recognition and segmentation via mamba-transformer architecture in abdominal ultrasound images. Front Artif Intell 2025;8:1618607. [Crossref] [PubMed]
  24. Tang R, Li Z, Jiang L, Jiang J, Zhao B, Cui L, Zhou G, Chen X, Jiang D. Development and Clinical Application of Artificial Intelligence Assistant System for Rotator Cuff Ultrasound Scanning. Ultrasound Med Biol 2024;50:251-7. [Crossref] [PubMed]
  25. Wu CH, Tsai CM, Kuo PL. From Visualization to Automation: A Narrative Review of Deep Learning's Impact on Ultrasound-based Median Nerve Assessment. J Med Ultrasound 2025;33:95-101. [Crossref] [PubMed]
  26. Ciobanu ȘG, Enache IA, Iovoaica-Rămescu C, Berbecaru EIA, Vochin A, Băluță ID, Istrate-Ofițeru AM, Comănescu CM, Nagy RD, Şerbănescu MS, Iliescu DG, Țieranu EN. Automatic Identification of Fetal Abdominal Planes from Ultrasound Images Based on Deep Learning. J Imaging Inform Med 2025;38:3984-91. [Crossref] [PubMed]
  27. Stekel SF, Long Z, Tradup DJ, Hangiandreou NJ. Use of Image-Based Analytics for Ultrasound Practice Management and Efficiency Improvement. J Digit Imaging 2019;32:251-9. [Crossref] [PubMed]
  28. Nowakowski P, Bieryło A. Ultrasound guided axillary brachial plexus block. Part 1--basic sonoanatomy. Anaesthesiol Intensive Ther 2015;47:409-16. [Crossref] [PubMed]
  29. Koh K, Tatsuki O, Sakuraba S, Yamazaki S, Yako H, Omae T. Neuropathies Following an Ultrasound-Guided Axillary Brachial Plexus Block. Local Reg Anesth 2023;16:123-32. [Crossref] [PubMed]
  30. Abe S, Kondo H, Tomiyama Y, Shimada T, Bun M, Kuriyama K. Risk factors for insufficient ultrasound-guided supraclavicular brachial plexus block. J Exp Orthop 2023;10:48. [Crossref] [PubMed]
  31. Niu J, Li Y, Zhang L, Ding Q, Cui L, Liu M. Cross-sectional area reference values for sonography of nerves in the upper extremities. Muscle Nerve 2020;61:338-46. [Crossref] [PubMed]
  32. Cartwright MS, Passmore LV, Yoon JS, Brown ME, Caress JB, Walker FO. Cross-sectional area reference values for nerve ultrasonography. Muscle Nerve 2008;37:566-71. [Crossref] [PubMed]
  33. Liang W, Liu Y, Zhao Y, Chen Y, Yin Y, Zhai L, Li Z, Gong Z, Zhang J, Zhang M. Quantitative MRI Analysis of Brachial Plexus and Limb-Girdle Muscles in Upper Extremity Onset Amyotrophic Lateral Sclerosis. J Magn Reson Imaging 2024;60:291-301. [Crossref] [PubMed]
  34. Gao S, Zhu H, Wen M, He W, Wu Y, Li Z, Peng J. Prediction of femoral head collapse in osteonecrosis using deep learning segmentation and radiomics texture analysis of MRI. BMC Med Inform Decis Mak 2024;24:320. [Crossref] [PubMed]
  35. Liu Y, Wu J, Zhou J, Guo J, Liang C, Xing Y, Wang Z, Chen L, Ding Y, Ren D, Bai Y, Hu D. Identification of high-risk population of pneumoconiosis using deep learning segmentation of lung 3D images and radiomics texture analysis. Comput Methods Programs Biomed 2024;244:108006. [Crossref] [PubMed]
Cite this article as: Zhang Z, Cao W, Luan H, Lin J, Zhu X, Yan Z, Wang J, Zhang H, Guo R, Bai Z. Automated segmentation and quantitative measurement of cervical nerves in ultrasound images using an SZJ-SEG-based deep learning framework. Quant Imaging Med Surg 2026;16(3):199. doi: 10.21037/qims-2025-aw-2434