Original Article
Shading correction for volumetric CT using deep convolutional neural network and adaptive filter
Abstract
Background: Shading artifacts can lead to CT number inaccuracy, loss of image contrast and spatial non-uniformity (SNU), and are considered one of the fundamental limitations of volumetric CT (VCT). To correct shading artifacts, a novel approach is proposed that combines deep learning with an adaptive filter (AF).
Methods: First, a deep convolutional neural network (DCNN) is trained to segment human tissue, and the trained model is applied to segment the uncorrected image. Based on the prior knowledge that the CT number of a given human tissue is approximately constant, a template image free of shading artifacts is generated by filling each segmented tissue class with its corresponding CT number. Subtracting the template image from the uncorrected image yields a residual image containing both image detail and shading artifacts. Because shading artifacts are predominantly low-frequency signals while image details are predominantly high-frequency signals, an adaptive filter is proposed to separate the two accurately. Finally, the estimated shading artifacts are subtracted from the raw image to produce the corrected image.
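The correction pipeline described above can be summarized in a minimal sketch. The code below is illustrative only: the tissue-to-CT-number mapping (`tissue_hu`), the function names, and the use of a Gaussian low-pass filter as a stand-in for the paper's adaptive filter are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correction(raw_image, segmentation, tissue_hu, sigma=30.0):
    """Illustrative shading-correction pipeline (not the authors' code).

    raw_image    : 2D array of CT numbers (HU) containing shading artifacts.
    segmentation : 2D integer array of tissue labels (e.g. from a DCNN).
    tissue_hu    : dict mapping each tissue label to a nominal CT number.
    sigma        : width of the Gaussian low-pass filter used here as a
                   simple surrogate for the paper's adaptive filter.
    """
    # 1. Build the shading-free template: fill each tissue with its nominal HU.
    template = np.zeros_like(raw_image, dtype=np.float64)
    for label, hu in tissue_hu.items():
        template[segmentation == label] = hu

    # 2. Residual = raw - template: contains image detail (high frequency)
    #    plus shading artifacts (low frequency).
    residual = raw_image.astype(np.float64) - template

    # 3. Estimate the low-frequency shading field from the residual.
    shading = gaussian_filter(residual, sigma=sigma)

    # 4. Subtract the estimated shading from the raw image.
    return raw_image - shading
```

For example, with labels {0: air, 1: soft tissue, 2: bone}, `tissue_hu` might be {0: -1000, 1: 40, 2: 700}; the actual tissue classes and reference CT numbers used in the study are not specified in the abstract.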
Results: In the Catphan©504 phantom study, the CT number error in the selected regions of interest (ROIs) of the corrected image is reduced from 109 HU to 11 HU, and the image contrast is increased by a factor of 1.46 on average. In the patient pelvis study, the CT number error in the selected ROIs is reduced from 198 HU to 10 HU, and the SNU calculated from the ROIs decreases from 24% to 9% after correction.
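These metrics can be computed from ROI statistics. The sketch below uses a commonly used SNU definition, (HU_max − HU_min) / 1000 × 100%, taken over ROI mean values, and a simple absolute-difference contrast measure between two ROIs; the exact definitions used in the paper may differ.

```python
import numpy as np

def snu_percent(roi_means):
    """Spatial non-uniformity from ROI mean CT numbers, using the common
    definition (max - min) / 1000 * 100%. The paper's definition may differ."""
    roi_means = np.asarray(roi_means, dtype=np.float64)
    return (roi_means.max() - roi_means.min()) / 1000.0 * 100.0

def roi_contrast(roi_a, roi_b):
    """Contrast between two ROIs as the absolute difference of their mean
    CT numbers (an assumed measure, not necessarily the paper's)."""
    return abs(np.mean(roi_a) - np.mean(roi_b))
```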
Conclusions: The proposed shading correction method using DCNN and AF may find a useful application in future clinical practice.