Original Article

Deep learning-based bone suppression in chest radiographs using CT-derived features: a feasibility study

Ge Ren1, Haonan Xiao1, Sai-Kit Lam1, Dongrong Yang1, Tian Li1, Xinzhi Teng1, Jing Qin2, Jing Cai1

1Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; 2School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China

Correspondence to: Jing Cai, PhD. Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China. Email: jing.cai@polyu.edu.hk.

Background: Bone suppression in chest X-rays holds the potential to improve the accuracy of target localization in image-guided radiation therapy (IGRT). However, training datasets for bone suppression are limited because of the scarcity of bone-free radiographs. This study aims to develop a deep learning-based bone suppression method using CT-derived features to reduce reliance on bone-free datasets.

Methods: In this study, 59 high-resolution lung CT scans were processed to generate lung digital radiographs (DRs), bone DRs, and bone-free DRs for the training and internal validation of the proposed cascade convolutional neural network (CCNN). A three-stage image processing framework (CT segmentation, DR simulation, and feature expansion) was developed to expand the simulated lung DRs with different weightings of bone intensity. The CCNN consists of a bone detection network and a bone suppression network. In external validation, the trained CCNN was evaluated using 30 chest radiographs. The synthesized bone-suppressed radiographs were compared with the bone-suppressed reference in terms of peak signal-to-noise ratio (PSNR), mean absolute error (MAE), structural similarity index measure (SSIM), and Spearman’s correlation coefficient. Furthermore, the effectiveness of the proposed feature expansion method and the CCNN model were assessed via an ablation experiment and a replacement experiment, respectively.

Results: Evaluation on real chest radiographs showed that the bone-suppressed chest radiographs closely matched the bone-suppressed reference, achieving an MAE of 0.0087±0.0030, SSIM of 0.8458±0.0317, correlation of 0.9554±0.0170, and PSNR of 20.86±1.60. After removing the feature expansion from the CCNN model, performance decreased in terms of MAE (0.0294±0.0093, −237.9%), SSIM (0.7747±0.0416, −8.4%), correlation (0.8772±0.0271, −8.2%), and PSNR (15.53±1.42, −25.5%).

Conclusions: We successfully demonstrated a novel deep learning-based bone suppression method that uses CT-derived features to reduce reliance on bone-free datasets. Implementation of the feature expansion procedures resulted in a remarkable reinforcement of model performance. For application to target localization in IGRT, clinical testing of the proposed method in the radiation therapy setting is a necessary step to move from theory into practice.

Keywords: Bone suppression; cascade neural network; deep learning; chest radiograph; chest X-ray


Submitted Nov 03, 2020. Accepted for publication Feb 22, 2021.

doi: 10.21037/qims-20-1230


Introduction

Lung cancer is the most common cause of cancer-related death worldwide. In 2018, lung cancer caused 1.7 million deaths and accounted for 11.6% of all new cancer cases worldwide (1). The large majority of lung cancer patients present with non-small cell lung cancer (NSCLC), and of these, approximately 30% present with locally advanced (stage III) disease. The current standard treatment for locally advanced unresectable NSCLC is definitive chemoradiotherapy followed by adjuvant immunotherapy (2,3). Image-guided radiation therapy (IGRT) for lung cancer aims to deliver a more accurate dose to the target region and reduce radiation damage to the surrounding normal tissue (4,5). To achieve this goal, different on-board imaging (OBI) methods have been applied to reduce variations in patient setup during the course of radiation therapy (6). Two-dimensional radiography is the most commonly used OBI modality for target localization before IGRT treatment. Specifically, the X-ray generator of the OBI system delivers kilovoltage (kV) photon beams, which are collected by a detector on the opposite side of the radiation source. The chest radiograph produced by the OBI system is used to determine the target position in relation to anatomical landmarks for patient setup in IGRT. Compared with other OBI techniques, such as cone-beam CT (CBCT), radiography has advantages in convenience and low delivered radiation dose (7,8). However, bony structures such as ribs often obscure lung tumors or landmarks, limiting target localization accuracy in IGRT (9,10). In a study of 6,000 individual fractions, localization using radiography had a mean error of 6±2 mm and a maximum error of 22 mm (11).

To improve setup accuracy, bone suppression in chest X-ray (CXR) images has been perceived as a promising solution (12). Previous bone suppression methods can be categorized as supervised or unsupervised (13). Unsupervised methods do not need chest radiographs for training; however, they require segmentation of the bones, after which bone-free images are reconstructed using a blind-source separation approach (14-17). The performance of unsupervised methods is heavily affected by segmentation accuracy. Supervised methods suppress bone structures in chest radiographs by regression prediction. With the advance of deep learning techniques, a variety of convolutional neural networks (CNNs) have achieved notable progress on the bone suppression task. For supervised models, bone-suppressed images acquired by dual-energy radiography systems are used as the training ground truth; examples include anatomically specific multiple massive-training artificial neural networks (18), filter learning (19), the massive-training artificial neural network (20), a cascade of multi-scale CNNs (13), frequency-specific deep neural network convolution (21), and adversarial networks (22).

In this study, we propose a deep learning-based method that uses features derived from pulmonary CT to achieve bone suppression in chest radiographs. Pulmonary CT images were used to derive lung digital radiographs (DRs) and bone-free lung DRs, which simulate chest radiographs and bone-free chest radiographs, respectively. A cascade convolutional neural network (CCNN) was trained on the DR dataset to learn bone suppression features. To increase the model’s generalizability to real chest radiographs, and considering the scarcity of bone-suppressed radiographs, we developed a novel digital reconstruction-based feature expansion strategy that expands the lung DRs with different weightings of bone intensity. To the best of our knowledge, this is the first attempt to achieve bone suppression in chest radiographs using CT-derived features. This study provides valuable insights for future research and encourages scientists in the field of IGRT to leverage feature expansion from CT images when training deep learning models for bone suppression, thereby enhancing localization accuracy in IGRT.


Methods

Dataset and study design

In this study, two publicly available datasets were utilized for model training and validation. The first dataset is the Reference Image Database to Evaluate Therapy Response (RIDER) collection of lung CT scans from The Cancer Imaging Archive (TCIA) (23,24). This dataset contains 59 high-resolution CT (HRCT) scans of the chest from non-small cell lung cancer patients. Each CT slice was reconstructed into a matrix size of 512×512 with a pixel size of 0.576×0.576 mm2 and a slice thickness of 1.25 mm. This dataset was used to derive the lung DRs, bone DRs, and bone-free DRs for model training. The second dataset was collected from the Japanese Society of Radiological Technology (JSRT) (25) and includes chest radiographs from subjects with or without lung nodules. The matrix size and pixel size of the chest radiographs are 2,048×2,048 and 0.175×0.175 mm2, respectively. In this study, 30 cases (~50% of the training set size) were randomly selected from the JSRT dataset for external validation of the model performance on real chest radiographs. To the best of our knowledge, there are no publicly available datasets of bone-free images acquired with a dual-energy X-ray system (26). Juhász et al. generated bone-suppressed images for the JSRT dataset using a semi-automatic method (27); these images, made available on Kaggle, were used as the bone-suppressed reference in this study.

Figure 1 illustrates the overall study design for bone suppression in chest radiographs. First, the HR lung CT images were processed to generate DRs; this processing comprised CT segmentation, DR simulation, and feature expansion. The expanded dataset was then used for training and internal validation of the proposed CCNN, with 80% of the expanded data used for training and 20% for internal validation. The performance of the proposed CCNN was evaluated using the mean absolute error (MAE) and structural similarity index measure (SSIM) to measure the difference and the similarity, respectively, between the predicted and actual values. To evaluate model performance on real radiographs, external validation was also conducted on 30 chest radiographs. The lung region of each real radiograph was manually segmented and cropped to the border of the lung, and the cropped image was resized to a 256×256 matrix. In the external validation, we also added the Spearman’s correlation coefficient and peak signal-to-noise ratio (PSNR) to evaluate the statistical similarity and reconstruction quality, respectively. Detailed descriptions of the image processing, the CCNN, and the evaluation metrics are presented in the following sections. To further assess the effectiveness of the feature expansion, an ablation experiment, in which the feature expansion procedures were removed, was conducted, followed by performance evaluation on the external validation dataset. The effectiveness of the proposed CCNN relative to the U-Net model was also examined by analyzing its impact on performance.
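For illustration, the crop-and-resize preprocessing of the external validation radiographs can be sketched as follows; this is a minimal example assuming a manually drawn binary lung mask, and the function name is illustrative rather than taken from our implementation.

import numpy as np
from skimage.transform import resize

def crop_and_resize(radiograph, lung_mask, out_size=(256, 256)):
    # Bounding box of the (manually drawn) binary lung mask
    rows = np.any(lung_mask, axis=1)
    cols = np.any(lung_mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    cropped = radiograph[r0:r1 + 1, c0:c1 + 1]
    # Resize to the 256x256 matrix used as network input
    return resize(cropped, out_size, preserve_range=True)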

Figure 1 The study design of the proposed method. HR, high-resolution; DR, digital radiograph; CCNN, cascade convolutional neural network.

Image processing

Figure 2 illustrates the three-stage image processing workflow developed to overcome the limited accessibility of bone-free chest radiographs in training the CCNN. The three stages are described as follows.

Figure 2 Image processing for CCNN training, which consists of three major stages: CT segmentation for separating lungs and bones, DR simulation for generating bone DR and bone-free DR, and feature expansion for network training. DR, digital radiograph; CCNN, cascade convolutional neural network.

CT segmentation

To extract the lung parenchyma region, lung masks were generated on the 3D CT using a pretrained U-Net (R231) model (28), which was trained on a diverse collection of lung CT scans. Bone structures were segmented by thresholding, with a cutoff of +300 HU. The bone and lung masks were then applied to the CT images to generate the bone CT and bone-free lung CT.
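A minimal sketch of this segmentation step is given below, assuming the CT volume is already in Hounsfield units and a boolean lung mask has been produced by the pretrained model; variable names are illustrative.

import numpy as np

BONE_THRESHOLD_HU = 300  # voxels above this cutoff are treated as bone

def split_bone_and_lung(ct_hu, lung_mask):
    # Binary bone mask from simple HU thresholding
    bone_mask = ct_hu > BONE_THRESHOLD_HU
    background = ct_hu.min()
    bone_ct = np.where(bone_mask, ct_hu, background)                  # bone-only CT
    bone_free_ct = np.where(lung_mask & ~bone_mask, ct_hu, background)  # bone-free lung CT
    return bone_ct, bone_free_ct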

DR simulation

To generate the bone DR and the corresponding bone-free lung DR, the digitally reconstructed radiograph (DRR) technique of the Insight Segmentation and Registration Toolkit (ITK), an open-source toolkit for medical image analysis and processing, was adopted for the simulation in Python. To focus on the lung regions, the lung and bone DRs were masked by a 2D lung mask generated from the bone-free lung DR images using morphological transformation. The masked images were cropped to the border of the lung to yield segmented bone and bone-free DR images, and then resized to 256×256 to save memory.
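Although the simulation used ITK’s DRR machinery, the underlying idea can be illustrated with a simplified parallel-beam ray sum along the anterior-posterior axis; this rough sketch ignores the divergent-beam geometry that a true DRR models, and the HU-to-attenuation mapping is a crude assumption.

import numpy as np

def simple_drr(ct_hu, ap_axis=1):
    # Crude mapping from HU to a non-negative attenuation surrogate
    mu = np.clip((ct_hu + 1000.0) / 1000.0, 0.0, None)
    projection = mu.sum(axis=ap_axis)  # parallel-beam ray sum along the AP axis
    # Normalize to [0, 1] for display / network input
    projection -= projection.min()
    return projection / max(projection.max(), 1e-8)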

Feature expansion

This step expands the bone intensity features in the lung DRs. To simulate the range of bone intensities seen in real DRs, a feature expansion strategy was implemented for network training, in which the segmented bone DRs and bone-free DRs were blended at varying intensity ratios. The strategy can be expressed mathematically as Eq. [1]:

$I_{lung} = \alpha \cdot I_{bone} + (1 - \alpha) \cdot I_{bone\text{-}free}$  [1]

where $I_{lung}$ is the simulated lung DR, $I_{bone}$ is the bone DR, $I_{bone\text{-}free}$ is the bone-free lung DR, and $\alpha$ denotes the bone ratio between the bone DR and the bone-free DR; $\alpha$ was set to 0.5, 0.4, 0.3, 0.2, and 0.1 in this study. Examples of simulated DRs are illustrated in Figure 3. All the simulated images were resized to a matrix size of 256×256 to alleviate the computational cost.
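Eq. [1] translates directly into code; the following sketch expands one bone DR/bone-free DR pair into five simulated lung DRs (function name illustrative).

import numpy as np

ALPHAS = [0.5, 0.4, 0.3, 0.2, 0.1]  # bone ratios used in this study

def expand_features(bone_dr, bone_free_dr):
    # One simulated lung DR per bone ratio alpha, per Eq. [1]
    return [alpha * bone_dr + (1.0 - alpha) * bone_free_dr for alpha in ALPHAS]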

Figure 3 Demonstration of results from the feature expansion, indicated as different bone intensity ratios in each column, for simulated lung radiographs (top row: expanded lung DRs; bottom row: bone DRs). DR, digital radiograph.

Architecture of cascade convolutional neural network (CCNN)

We propose a CCNN for the bone suppression task, consisting of a bone detection network and a bone suppression network, as shown in Figure 4. Both networks contain an encoder with convolutional layers for extracting image features and a decoder with transposed convolutional layers for reconstructing output images. Sixteen convolutional layers were used to learn features in the DR images; each convolution has a kernel size of 3×3 and is coupled with batch normalization and a Parametric Rectified Linear Unit (PReLU). Similar to the U-Net architecture (29), multi-skip connections were adopted in the proposed CCNN to translate the local details captured in the feature maps from the encoding path to the decoding path, with connections in the middle of the network designed to fully utilize the high-level features. To facilitate bone suppression, the bone detection network was first used to detect the locations of bone structures. The output of the bone detection network was then concatenated with the original lung DR images to form the input of the second, bone suppression network for bone-free DR generation.
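The cascade idea can be sketched in PyTorch as follows. This is a minimal illustration, not the exact 16-convolution configuration described above: the toy EncoderDecoder class, its channel counts, and its depth are our illustrative assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolution + batch normalization + PReLU, as described in the text
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.PReLU(),
    )

class EncoderDecoder(nn.Module):
    """Toy encoder-decoder with one skip connection."""
    def __init__(self, in_ch):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.down = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.enc2 = conv_block(64, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)   # 64 = 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d)

class CCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.bone_detector = EncoderDecoder(in_ch=1)
        self.bone_suppressor = EncoderDecoder(in_ch=2)  # lung DR + detected bone

    def forward(self, lung_dr):
        bone_pred = self.bone_detector(lung_dr)
        # Detected bone map is concatenated with the input lung DR
        bone_free_pred = self.bone_suppressor(torch.cat([lung_dr, bone_pred], dim=1))
        return bone_pred, bone_free_pred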

Figure 4 The architecture of the proposed CCNN. CCNN, cascade convolutional neural network.

The proposed CCNN learns its optimal parameters by minimizing a loss function composed of two parts. The first part is the mean square error (MSE) between the predicted and ground truth bone DR, expressed as Eq. [2]:

$L_1(I_{lung}) = \left( I_{bone} - I_{bone\_pred} \right)^2$  [2]

where Ibone_pred represents the output of the first network. The second loss was the MSE between the generated and ground truth bone-free lung DR, and is expressed as Eq. [3]:

$L_2(I_{lung}, I_{bone\_pred}) = \left( I_{bone\text{-}free} - I_{bone\text{-}free\_pred} \right)^2$  [3]

where $I_{bone\text{-}free\_pred}$ denotes the output of the second network. The overall loss function of the CCNN was calculated using Eq. [4]:

$L = L_1 + \gamma L_2$  [4]

where $\gamma$ is the weighting factor of the second network’s loss and was empirically set to five. During optimization, each layer was updated using error backpropagation with the adaptive moment estimation (Adam) optimizer. The CCNN was trained for 600 epochs, with lateral flips used to augment the training dataset. The CT images and chest radiographs were processed prior to model training and validation. The convolutional layers were initialized using the Kaiming Uniform method (30). The model was built in the PyTorch 1.1 framework and coded in Python. All deep learning tasks were performed on a computer with an NVIDIA RTX 2080 Ti GPU with 11 GB of memory.
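For illustration, a minimal training-step sketch for the composite loss of Eqs. [2]-[4] is given below, using $\gamma$ = 5, Adam, and Kaiming Uniform initialization as described; the learning rate and other details are assumptions.

import torch
import torch.nn as nn

GAMMA = 5.0  # weighting factor of the second network's loss

def init_weights(module):
    # Kaiming Uniform initialization for (transposed) convolutions
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.kaiming_uniform_(module.weight)

model = CCNN()  # as sketched in the architecture section above
model.apply(init_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr is illustrative
mse = nn.MSELoss()

def train_step(lung_dr, bone_dr, bone_free_dr):
    optimizer.zero_grad()
    bone_pred, bone_free_pred = model(lung_dr)
    # L = L1 + gamma * L2, per Eq. [4]
    loss = mse(bone_pred, bone_dr) + GAMMA * mse(bone_free_pred, bone_free_dr)
    loss.backward()
    optimizer.step()
    return loss.item()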

Performance evaluation

For the internal validation, we adopted the SSIM and MAE to measure the similarity and the intensity difference between images, respectively.

The SSIM compares three measurements between the generated image and the reference image: a luminance term, a contrast term, and a structural term (31). The overall SSIM is a multiplicative combination of these terms, as in Eq. [5]:

$\mathrm{SSIM} = \dfrac{2\mu_I \mu_{I_{pred}} + C_1}{\mu_I^2 + \mu_{I_{pred}}^2 + C_1} \cdot \dfrac{2\sigma_{I I_{pred}} + C_2}{\sigma_I^2 + \sigma_{I_{pred}}^2 + C_2}$  [5]

where $\mu_I$, $\mu_{I_{pred}}$, $\sigma_I$, $\sigma_{I_{pred}}$, and $\sigma_{I I_{pred}}$ are the local means, standard deviations, and cross-covariance for images $I$ and $I_{pred}$, respectively. $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$ are two variables that stabilize the division when the denominator is weak; $L$ is the dynamic range of the pixel values, with $k_1 = 0.01$ and $k_2 = 0.03$ by default. SSIM lies within the range of [−1, 1], with higher values indicating greater structural similarity.

MAE was used to measure the arithmetic average of the absolute difference between the reference and predicted images (13), calculated as Eq. [6]:

$\mathrm{MAE} = \dfrac{\sum_{i=1}^{n} \left| I_{pred}(i) - I(i) \right|}{n}$  [6]

where $I_{pred}(i)$ is the synthesized image, $I(i)$ is the reference image, and $i$ is the pixel index.

For the external validation, the Spearman’s correlation coefficient (Eq. [7]) and the PSNR (Eq. [8]) were also used to evaluate the statistical similarity and the reconstruction quality, respectively.

$R = \dfrac{\sum_{i=1}^{n} \left[ (I(i) - \bar{I}) \cdot (I_{pred}(i) - \bar{I}_{pred}) \right]}{\sqrt{\sum_{i=1}^{n} (I(i) - \bar{I})^2} \sqrt{\sum_{i=1}^{n} (I_{pred}(i) - \bar{I}_{pred})^2}}$  [7]

where $I_{pred}$ denotes the predicted bone-suppressed image obtained from the proposed framework, and $I$ denotes the bone-suppressed reference. $R$ lies within the range of [−1, 1] and represents the intensity monotonicity of spatially correlated voxels.

$\mathrm{MSE} = \dfrac{\sum_{i=1}^{n} \left( I_{pred}(i) - I(i) \right)^2}{n}$

$\mathrm{PSNR} = 20 \cdot \log_{10}\left( \dfrac{MAX}{\sqrt{\mathrm{MSE}}} \right)$  [8]

where $MAX$ denotes the maximum possible pixel value of the image, and MSE is the mean square error between the predicted bone-suppressed image obtained from the proposed framework and the bone-suppressed reference.
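For reference, all four metrics can be computed with common library implementations, as in the hedged sketch below; skimage’s SSIM uses the default $k_1 = 0.01$ and $k_2 = 0.03$ noted above, and the function name is illustrative.

import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity

def evaluate(pred, ref, max_val=1.0):
    mae = float(np.mean(np.abs(pred - ref)))                     # Eq. [6]
    ssim = structural_similarity(ref, pred, data_range=max_val)  # Eq. [5]
    rho = spearmanr(ref.ravel(), pred.ravel()).correlation       # Eq. [7]
    mse = np.mean((pred - ref) ** 2)
    psnr = 20.0 * np.log10(max_val / np.sqrt(mse))               # Eq. [8]
    return {"MAE": mae, "SSIM": ssim, "R": rho, "PSNR": psnr}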


Results

Evaluation on the internal validation group

In the internal validation group, the performance of the model was evaluated on the DR dataset. The training time of the model was approximately 2.58 hours. The CCNN model achieved an average MAE of 0.0613±0.0230 and an average SSIM of 0.8856±0.0415. A biphasic trend in model performance on the internal validation set with increasing bone intensity can be observed in Figure 5. The trained model yielded its best performance on the lung DR images with 30% bone, with an MAE of 0.0362±0.0043 and an SSIM of 0.9266±0.0120. From the qualitative perspective in Figure 6, the absolute difference between the predicted and reference DRs changed remarkably with the bone intensity ratio; the minimum difference was observed on the lung DR images with 30% bone structures. For images with a larger bone intensity ratio, the bone structures could not be fully suppressed in the synthesized image. For images with a lower bone ratio, while the bone structures were easily suppressed, the suppression simultaneously weakened the intensity of surrounding tissue, such as lung nodules, as perceived in Figure 6.

Figure 5 Quantitative evaluation of performance on the internal validation group. MAE, mean absolute error; SSIM, structural similarity index measure.
Figure 6 Qualitative evaluation of representative cases on the internal validation group.

External validation on real chest radiographs

On the real chest radiographs, the trained model achieved an MAE of 0.0087±0.0030, an SSIM of 0.8458±0.0317, and a PSNR of 20.86±1.60. The bone-suppressed radiographs had a very high correlation of 0.9554±0.0170 with the bone-suppressed reference. In the qualitative evaluation of the representative case, five nodules remained detectable after applying bone suppression; however, some texture details could not be observed on the 0–1 color scale (Figure 7).

Figure 7 A representative case on the external validation group. The red arrows indicate the nodules.

Overall performance analysis

Figure 8 illustrates the effectiveness of the proposed CCNN and of the feature expansion in image processing. After the removal of feature expansion, the CCNN model demonstrated remarkable degradation in performance. The drop was most severe in MAE (0.0294±0.0093, −237.9%), followed by comparative reductions in PSNR (15.53±1.42, −25.5%), SSIM (0.7747±0.0416, −8.4%), and correlation (0.8772±0.0271, −8.2%). Compared with the proposed method, the U-Net coupled with feature expansion achieved decreased performance in PSNR (11.6%), MAE (4.1%), SSIM (0.8%), and correlation (0.4%), with an average PSNR of 18.43±2.26, MAE of 0.0164±0.0092, SSIM of 0.8390±0.0385, and correlation of 0.9513±0.0199. After the removal of feature expansion, the performance of the U-Net model was also drastically degraded in terms of MAE (0.0480±0.0191, −192.7%), followed by PSNR (13.53±1.81, −26.6%), SSIM (0.7396±0.0388, −11.8%), and correlation (0.8977±0.0218, −5.6%). In the qualitative evaluation of the representative case (Figure 9), the nodule remained detectable in both scenarios.

Figure 8 Quantitative comparison of the CCNN model on the external validation group. CCNN, cascade convolutional neural network.
Figure 9 Performance comparison of CCNN and U-Net of a representative case on the external validation group. The red arrows indicate the nodules. CCNN, cascade convolutional neural network.

Discussion

In this study, we developed a novel deep learning-based bone suppression method for chest radiographs using CT-derived features. The HRCT images were processed for feature simulation prior to model training; in this processing, feature expansion was used to increase the generalizability of the proposed method. The expanded dataset was used to train and validate the proposed CCNN, which is composed of two networks for bone detection and bone suppression, respectively. In external validation, the comparative evaluation on real chest radiographs showed that the bone-suppressed radiographs closely approximated the bone-suppressed reference (MAE =0.0087±0.0030, SSIM =0.8458±0.0317, correlation =0.9554±0.0170, and PSNR =20.86±1.60). The study therefore successfully demonstrated the feasibility of generalizing this model to OBI chest radiographs.

In the image processing, we generated lung DRs and corresponding bone-free DRs with varying bone ratios to simulate the diversity of bone structures in real chest radiographs. In the model evaluation on the internal validation dataset, there was a biphasic change in the performance of the CCNN model with increasing bone ratios, with the best performance observed on the lung DR images with 30% bone structures. For images with a larger bone ratio, the bone structures could not be fully suppressed in the synthesized image. For images with a lower bone ratio, the bone structures were easily suppressed, but at the cost of weakened surrounding tissue signals, such as those of lung nodules. To further analyze the impact of feature expansion, ablation experiments were conducted with the CCNN and U-Net models. Removal of the feature expansion steps dramatically decreased performance, as shown in Figure 8. This finding indicates the effectiveness and importance of feature expansion for model generalization to the external real chest radiograph dataset.

There are two categories of CNN architecture for achieving bone suppression. The first can be referred to as direct synthesis, such as the U-Net, where the bone-suppressed image is generated directly from the input images. The second can be regarded as guided synthesis, such as the proposed CCNN model, where the ultimate formation of the bone-suppressed image is guided by the interim output of a bone detection network, which provides location information on the bone structures prior to image synthesis. In this study, we compared these two types of models. Compared with the CCNN model, the U-Net model showed degraded performance in terms of PSNR (11.6%), MAE (4.1%), correlation (0.4%), and SSIM (0.8%). The U-Net worked better for recovering structural details because of its multi-skip connections; however, there was a higher risk of unwanted suppression of surrounding tissues, such as lung nodules, when performing bone suppression with the U-Net model (Figure 10). We therefore recommend the CCNN architecture, since it is effective in suppressing bone structures while preserving desirable detectability of lung nodules. In the future, we will continue to develop a more efficient CNN model that can recover lung nodules and detailed structures simultaneously.

Figure 10 Illustration of unwanted nodule suppression of U-Net. The red arrows indicate the nodules. CCNN, cascade convolutional neural network.

Before our study, most CNN models for bone suppression were trained on in-house datasets acquired with dual-energy X-ray systems, achieving very promising performance with high detail recovery (22). However, for application in IGRT, the generalizability of dual-energy X-ray-based models can be influenced by heterogeneity in radiography systems and imaging protocols. Our study instead extracts bone suppression features from CT images, which are widely available in radiation therapy departments. Only one existing study has used CT-derived features, detecting bone structures in chest DRs with an SSIM of 0.7000±0.0800 (32); our study used CT-derived features for bone suppression in DRs and achieved an SSIM of 0.8458±0.0317. This accuracy is acceptable because recovery of detailed information is not the primary consideration for target localization in IGRT. In addition, the contrast-to-noise ratio (CNR) (33) was calculated to evaluate the visibility of lung tumors. After bone suppression, the average CNR increased from 0.9282±0.4018 to 1.1239±0.5093 (P=0.002), suggesting increased visibility of lung anatomy. Considering the scarcity of bone-free radiographs for model training, our study provides valuable insights for future studies and encourages scientists in the field of radiation therapy to leverage feature expansion from CT images to train models for radiography enhancement and improve localization accuracy in IGRT. CT-derived features also hold promise for generating enhanced radiographs for OBI at different angles and for different organs.
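The text does not spell out the CNR formula; assuming the common region-of-interest formulation (following the general form in reference 33), in which a tumor ROI is contrasted against a background ROI relative to the background noise, it can be sketched as follows.

import numpy as np

def contrast_to_noise_ratio(image, tumor_mask, background_mask):
    # |mean(tumor ROI) - mean(background ROI)| / std(background ROI)
    tumor = image[tumor_mask]
    background = image[background_mask]
    return abs(tumor.mean() - background.mean()) / background.std()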

Despite the demonstrated feasibility of our proposed CCNN model, the accuracy of the synthesized bone-suppressed radiographs remains to be improved. For instance, lung nodules located in edge regions are difficult to detect owing to the minimal difference in signal intensity between the nodules and nearby peripheral lung tissue, which posed challenges in the DR simulation process of our study. In the future, transfer learning on a small number of bone-suppressed chest radiographs should be used to improve the detectability of nodules in the peripheral lung regions. Moreover, the generalizability of our model can be influenced by heterogeneities in X-ray projections, imaging protocols, and individual bone variations (such as previous fractures and calcifications). To achieve broader model deployment, case-by-case evaluation is warranted to ensure model generalizability across various clinical conditions. Lastly, since the external validation cohort is still limited in its X-ray projections, a large-scale study using CXRs acquired from a linear accelerator (LINAC) is required to further examine the effectiveness of the proposed method in the context of IGRT; this will be considered as an extension of this feasibility study.


Conclusions

We successfully demonstrated the feasibility of using CT-derived features for bone suppression in chest radiographs with a deep learning-based framework. Implementation of the feature expansion procedures resulted in a remarkable reinforcement of model performance. These findings provide valuable insights for future radiography enhancement studies in IGRT, especially in view of the severe scarcity of bone-free radiographs. For application to target localization in IGRT, clinical testing of the proposed method in radiation therapy patients is a necessary step to move from theory into practice.


Acknowledgments

Funding: This research was partly supported by research grants of General Research Fund (GRF 15103520/20M), the University Grants Committee, and Health and Medical Research Fund (HMRF COVID190211, HMRF 07183266), the Food and Health Bureau, The Government of the Hong Kong Special Administrative Regions.


Footnote

Provenance and Peer Review: With the arrangement by the Guest Editors and the editorial office, this article has been reviewed by external peers.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/qims-20-1230). The special issue “Artificial Intelligence for Image-guided Radiation Therapy” was commissioned by the editorial office without any funding or sponsorship. JC served as the unpaid Guest Editor of the special issue. JC is supported by the General Research Fund of Hong Kong (GRF 15103520/20M) and the Health and Medical Research Fund of Hong Kong (HMRF 07183266, HMRF COVID190211). The authors have no other conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study used open-source data and ethics approval was not required for this type of study.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Ferlay J, Soerjomataram I, Dikshit R, Eser S, Mathers C, Rebelo M, Parkin DM, Forman D, Bray F. Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int J Cancer 2015;136:E359-86. [Crossref] [PubMed]
  2. Vansteenkiste J, Betticher D, Eberhardt WF, De Leyn P. Randomized controlled trial of resection versus radiotherapy after induction chemotherapy in stage IIIA-N2 non-small cell lung cancer. J Thorac Oncol 2007;2:684-5. [Crossref] [PubMed]
  3. Albain KS, Swann RS, Rusch VW, Turrisi AT, Shepherd FA, Smith C, Chen Y, Livingston RB, Feins RH, Gandara DR, Fry WA, Darling G, Johnson DH, Green MR, Miller RC, Ley J, Sause WT, Cox JD. Radiotherapy plus chemotherapy with or without surgical resection for stage III non-small-cell lung cancer: a phase III randomised controlled trial. Lancet 2009;374:379-86. [Crossref] [PubMed]
  4. Chen GT, Sharp GC, Mori S. A review of image-guided radiotherapy. Radiol Phys Technol 2009;2:1-12. [Crossref] [PubMed]
  5. Korreman S, Rasch C, McNair H, Verellen D, Oelfke U, Maingon P, Mijnheer B, Khoo V. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: A practical and technical review and guide. Radiother Oncol 2010;94:129-44. [Crossref] [PubMed]
  6. Xiao Y. Image-Guided Radiation Therapy (IGRT): kV Imaging. In: Brady LW, Yaeger TE, editors. Encyclopedia of Radiation Oncology. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013:343-51.
  7. Ren XC, Liu YE, Li J, Lin Q. Progress in image-guided radiotherapy for the treatment of non-small cell lung cancer. World J Radiol 2019;11:46-54. [Crossref] [PubMed]
  8. Arns A, Blessing M, Fleckenstein J, Stsepankou D, Boda-Heggemann J, Simeonova-Chergou A, Hesser J, Lohr F, Wenz F, Wertz H. Towards clinical implementation of ultrafast combined kV-MV CBCT for IGRT of lung cancer: Evaluation of registration accuracy based on phantom study. Strahlenther Onkol 2016;192:312-21. [Crossref] [PubMed]
  9. Austin JH, Romney B, Goldsmith L. Missed bronchogenic carcinoma: radiographic findings in 27 patients with a potentially resectable lesion evident in retrospect. Radiology 1992;182:115-22. [Crossref] [PubMed]
  10. Shah PK, Austin JH, White CS, Patel P, Haramati LB, Pearson GD, Shiau MC, Berkmen YM. Missed non-small cell lung cancer: radiographic findings of potentially resectable lesions evident only in retrospect. Radiology 2003;226:235-41. [Crossref] [PubMed]
  11. Stanley DN, McConnell KA, Kirby N, Gutiérrez AN, Papanikolaou N, Rasmussen K. Comparison of Initial Patient Setup Accuracy Between Surface Imaging and Three Point Localization: A Retrospective Analysis. J Appl Clin Med Phys 2017;18:58-61. [Crossref] [PubMed]
  12. Tanaka R, Sanada S, Sakuta K, Kawashima H. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT). Phys Med Biol 2015;60:N209-18.
  13. Yang W, Chen YY, Liu YB, Zhong LM, Qin GG, Lu ZT, Feng QJ, Chen WF. Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain. Med Image Anal 2017;35:421-33. [Crossref] [PubMed]
  14. Hogeweg L, Sánchez CI, van Ginneken B. Suppression of translucent elongated structures: applications in chest radiography. IEEE Trans Med Imaging 2013;32:2099-113. [Crossref] [PubMed]
  15. Lee JS, Wang JW, Wu HH, Yuan MZ. A nonparametric-based rib suppression method for chest radiographs. Comput Math Appl 2012;64:1390-9. [Crossref]
  16. Rasheed T, Ahmed B, Khan MA, Bettayeb M, Lee S, Kim TS. Rib suppression in frontal chest radiographs: a blind source separation approach. In: 2007 9th International Symposium on Signal Processing and Its Applications. IEEE; 2007.
  17. Simkó G, Orbán G, Máday P, Horváth G. Elimination of clavicle shadows to help automatic lung nodule detection on chest radiographs. In: 4th European Conference of the International Federation for Medical and Biological Engineering. Springer; 2009.
  18. Chen S, Suzuki K. Separation of bones from chest radiographs by means of anatomically specific multiple massive-training ANNs combined with total variation minimization smoothing. IEEE Trans Med Imaging 2014;33:246-57. [Crossref] [PubMed]
  19. Loog M, van Ginneken B, Schilham AM. Filter learning: application to suppression of bony structures from chest radiographs. Med Image Anal 2006;10:826-40. [Crossref] [PubMed]
  20. Suzuki K, Abe H, MacMahon H, Doi K. Image-processing technique for suppressing ribs in chest radiographs by means of massive training artificial neural network (MTANN). IEEE Trans Med Imaging 2006;25:406-16. [Crossref] [PubMed]
  21. Zarshenas A, Liu JC, Forti P, Suzuki K. Separation of bones from soft tissue in chest radiographs: Anatomy-specific orientation-frequency-specific deep neural network convolution. Med Phys 2019;46:2232-42. [Crossref] [PubMed]
  22. Oh DY, Yun ID. Learning bone suppression from dual energy chest X-rays using adversarial networks. arXiv preprint; 2018.
  23. Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 2013;26:1045-57. [Crossref] [PubMed]
  24. Zhao B, James LP, Moskowitz CS, Guo P, Ginsberg MS, Lefkowitz RA, Qin Y, Riely GJ, Kris MG, Schwartz LH. Evaluating variability in tumor measurements from same-day repeat CT scans of patients with non–small cell lung cancer. Radiology 2009;252:263-72. [Crossref] [PubMed]
  25. Shiraishi J, Katsuragawa S, Ikezoe J, Matsumoto T, Kobayashi T, Komatsu K, Matsui M, Fujita H, Kodera Y, Doi K. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. AJR Am J Roentgenol 2000;174:71-4. [Crossref] [PubMed]
  26. Eslami M, Tabarestani S, Albarqouni S, Adeli E, Navab N, Adjouadi M. Image-to-Images Translation for Multi-Task Organ Segmentation and Bone Suppression in Chest X-Ray Radiography. IEEE Trans Med Imaging 2020;39:2553-65. [Crossref] [PubMed]
  27. Juhász S, Horváth Á, Nikházy L, Horváth G. Segmentation of anatomical structures on chest radiographs. Berlin, Heidelberg: Springer Berlin Heidelberg; 2010.
  28. Hofmanninger J, Prayer F, Pan J, Röhrich S, Prosch H, Langs G. Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem. Eur Radiol Exp 2020;4:50. [Crossref] [PubMed]
  29. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Med Image Comput Comput Assist Interv 2015;9351:234-41.
  30. He KM, Zhang XY, Ren SQ, Sun J. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV); 2015:1026-34.
  31. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004;13:600-12. [Crossref] [PubMed]
  32. Gozes O, Greenspan H. Bone structures extraction and enhancement in chest radiographs via CNN trained on synthetic data. arXiv:2003.10839; 2020. Available online: https://ui.adsabs.harvard.edu/abs/2020arXiv200310839G
  33. Timischl F. The contrast-to-noise ratio for image quality evaluation in scanning electron microscopy. Scanning 2015;37:54-62. [Crossref] [PubMed]
Cite this article as: Ren G, Xiao H, Lam SK, Yang D, Li T, Teng X, Qin J, Cai J. Deep learning-based bone suppression in chest radiographs using CT-derived features: a feasibility study. Quant Imaging Med Surg 2021;11(12):4807-4819. doi: 10.21037/qims-20-1230
