
Robust restoration of low-dose cerebral perfusion CT images using NCS-Unet


Nuclear Science and Techniques, 2022, Issue 3

Kai Chen · Li-Bo Zhang · Jia-Shun Liu · Yuan Gao · Zhan Wu · Hai-Chen Zhu · Chang-Ping Du · Xiao-Li Mai · Chun-Feng Yang · Yang Chen

Abstract Cerebral perfusion computed tomography (PCT) is an important imaging modality for evaluating cerebrovascular diseases and stroke symptoms. With widespread public concern about the potential cancer risks and health hazards associated with cumulative radiation exposure in PCT imaging, considerable research has been conducted to reduce the radiation dose in X-ray-based brain perfusion imaging. Reducing the dose of X-rays causes severe noise and artifacts in PCT images. To solve this problem, we propose a deep learning method called NCS-Unet. The exceptional characteristics of the non-subsampled contourlet transform (NSCT) and the Sobel filter are introduced into NCS-Unet. NSCT decomposes the convolved features into high- and low-frequency components. The decomposed high-frequency component retains image edges, contrast imaging traces, and noise, whereas the low-frequency component retains the main image information. The Sobel filter extracts the contours of the original image and the imaging traces caused by contrast agent decay. The extracted information is added to NCS-Unet to improve its performance in noise reduction and artifact removal. Qualitative and quantitative analyses demonstrated that the proposed NCS-Unet can improve the quality of low-dose cone-beam CT perfusion reconstruction images and the accuracy of perfusion parameter calculations.

Keywords Cerebral perfusion CT · Low-dose · Image denoising · Perfusion parameters

1 Introduction

Perfusion computed tomography (PCT) is commonly used for the diagnosis of stroke symptoms [1, 2]. The C-arm scanning mode is a reciprocal scan rather than a continuous unidirectional one, and the scan time for C-arm cone-beam CT (CBCT) is 5 s [3, 4]. With reciprocating scanning, an additional pause time of 1 s is required. The slower scanning speed of the C-arm increases the radiation dose and causes errors in the calculation of the perfusion parameters [5]. Appropriately reducing the scan time and dose of C-arm CBCT while providing image details close to normal-dose conditions is a direction that must be investigated for perfusion imaging [6-8]. Previous studies, including those on iterative reconstruction [9-13], dictionary learning [14, 15], the low-rank characteristic of matrices [16-20], total variation regularization [21-23], and sparse sampling [24-27], were dedicated to improving the quality of low-dose PCT images and the calculation accuracy of perfusion parameters. Liu et al. proposed a dynamic rollback reconstruction method based on CBCT [28]. This method improved the temporal resolution by increasing the number of sampling points. Among these methods, the classical approach involves directly post-processing the reconstructed PCT images. For example, Ma et al. developed an algorithm called MAP-ndiNLM, an iterative image-reconstruction algorithm based on the maximum a posteriori principle, to produce clinically acceptable cerebral PCT from low-dose PCT images [29]. Mendrik et al. proposed a time-intensity profile similarity bilateral filter to reduce noise in four-dimensional PCT scans while retaining the time-intensity profiles [30]. Ma et al. proposed an innovative method that uses normal-dose scan information as a priori information to induce signal recovery in low-dose PCT images [31]. Supanich et al. used HighlY constrained back PRojection (HYPR) image reconstruction to reduce the dose and reconstruct images with low image noise [32]. Huang et al. proposed a threshold selection method that optimizes the energy thresholds based on the target component coefficients line by line and then obtains the overall optimal energy threshold by frequency voting, which yields better-quality images in k-edge imaging [33]. Another solution is to use deconvolution to compute the perfusion parameters directly. Boutelier et al. introduced a delay-insensitive probabilistic method for estimating hemodynamic parameters, theoretical residue functions, and concentration-time curves [34]. He et al. introduced a spatiotemporal deconvolution method to improve the characterization of residue functions and quantify the perfusion parameters [35]. High-resolution CT imaging also plays an irreplaceable role in other areas. Sun et al. quantified the pore throat, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstones using high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence [36].

Recently, deep learning has demonstrated competitive performance in medical image processing [37-44]. The images produced by a scanner often suffer from artifacts and noise because of sampling and dose limitations or physical defects, which may adversely affect diagnostic performance. Deep-learning-based methods can overcome the problem of accurate noise detection in the image domain when a sufficient number of good samples is provided. For instance, Hu et al. proposed a novel low-dose CT (LDCT) noise reduction method that maps LDCT images to the corresponding normal-dose images in a slice-by-slice manner [45]. Kang et al. proposed an optimized convolutional neural network (CNN) structure for CT image denoising; based on the ability of a directional wavelet transform to detect the directional components of noise, they constructed a deep CNN in the wavelet domain [46]. The DnCNN model proposed by Zhang et al. utilized residual learning and batch normalization to accelerate the training process and improve denoising performance [47]. Ma et al. adopted a CNN focusing on residual density, called AttRDN, for LDCT sinogram denoising to prevent the loss of detailed information, which has potential for clinical applications [48]. Yang et al. incorporated the imaging physics of CBCT into a residual convolutional neural network and proposed a new end-to-end deep-learning-based slice reconstruction method [49].

An overwhelming challenge that CT perfusion imaging must address is the use of LDCT perfusion data to calculate perfusion parameters [50]. To address this challenge, we propose a deep learning method called NCS-Unet to post-process low-dose PCT images and calculate the perfusion parameters. The non-subsampled contourlet transform (NSCT) is used to process the feature maps after convolution. Additionally, the Sobel filter in NCS-Unet extracts the edge information of the original image and the imaging traces of the contrast agent. The proposed NCS-Unet was compared with the BM3D denoising algorithm [51], the discriminative feature representation (DFR) sparse dictionary learning method [52], and the REDCNN image-domain post-processing algorithm [45] for low-dose PCT restoration. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used to evaluate the restoration performance. Singular value decomposition (SVD) was used to calculate the perfusion parameters, including cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT), from the images processed using the four different algorithms.

2 Method

As shown in Fig. 1, the number of channels of the input image is increased to 32 by two 3 × 3 convolutions, and NSCT is then performed on the features. NCS-Unet consists of convolutional pre-processing, non-subsampled pyramid (NSP) decomposition, Sobel edge extraction, feature extraction blocks, and NSP merging. The high- and low-frequency information is fused with the gradients of the original image. Finally, the fused information is processed by three multiscale feature extraction blocks to obtain the high-quality PCT image.

2.1 Non-subsampled contourlet transform (NSCT) and Sobel filter

The wavelet transform [53-55] provides a sparser representation of one-dimensional, piecewise differentiable functions than the Fourier transform [56-58]. However, it is difficult to extend these good properties from one-dimensional signals to two-dimensional or higher-dimensional signals. Because the high-dimensional wavelet transform is built from tensor products of one-dimensional wavelet bases, each dimension of the signal is downsampled by the same factor. The sampling operation of the conventional wavelet transform is isotropic, captures only a few directions, and lacks translation invariance; it is therefore insufficient for representing edge orientation in two-dimensional or higher-dimensional images. Vetterli et al. proposed the contourlet transform [59], an efficient method for representing two-dimensional images. In contrast to the wavelet transform [60-64], the contourlet transform uses an element called a base structure, similar to a contour segment, to fit the original image. The support interval of the base is a rectangular structure whose aspect ratio can vary with scale, and this structure also has directionality and anisotropy. Zhou et al. proposed NSCT [65] to solve the problem that the contourlet transform is not translation invariant. The contourlet transform lacks translation invariance because of the upsampling and corresponding downsampling operations in the Laplacian pyramid and the directional filter bank. To preserve the multiscale property, the Laplacian pyramid is replaced by the non-subsampled pyramid (NSP) decomposition structure, and a non-subsampled directional filter bank (NSDFB) is used to preserve directionality.

When the image is decomposed using an L-level non-subsampled pyramid to achieve multiscale decomposition, L+1 subband maps of the same size as the original image are obtained. The NSP decomposition consists of a low-pass filter and a high-pass filter. For the decomposed subband maps to be fully restored when a two-channel filter bank is used to decompose the image, the filters should satisfy the perfect-reconstruction relationship

H_0(z)G_0(z) + H_1(z)G_1(z) = 1,  (1)

where H_0(z) is the low-pass decomposition filter, H_1(z) is the high-pass decomposition filter, G_0(z) is the low-pass reconstruction filter, and G_1(z) is the high-pass reconstruction filter.

Fig. 1 (Color online) Low-dose PCT post-processing framework called NCS-Unet that introduces the Sobel filter and NSCT

The frequency information is decomposed into 2^k subbands by the directional filter bank using a k-level binary tree. The map in each direction, matched to the size of the original image, represents only the information within that direction. When the high-frequency subband map is decomposed in k directional layers, 2^k directional subband maps of the same size as the original image are obtained simultaneously. The L-level NSCT of the image thus yields one low-frequency subband map and ∑_{j=1}^{L} 2^{k_j} directional subbands, where j indexes the decomposition layers and k_j is the number of directional decompositions at layer j. The input image is decomposed into two independent components corresponding to the high- and low-frequency information. Subsequently, the non-subsampled directional filter bank decomposes the high-frequency subband map into several directional subband maps, and the low-frequency subband map then repeats the same operations. NSCT is a fast image transformation with the multiscale, multidirectional, and translation-invariance properties that the wavelet transform lacks.
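To make the non-subsampled split concrete, the following is a minimal, hypothetical sketch (not the authors' implementation; the low-pass kernel is an arbitrary stand-in for the designed NSCT filter bank) of a one-level decomposition that keeps both subbands at the original image size:

```python
import numpy as np
from scipy.ndimage import convolve

def nsp_split(image: np.ndarray):
    """One-level non-subsampled pyramid split (illustrative only).

    The low-pass band is obtained by filtering without any downsampling,
    so both outputs keep the original image size; the high-pass residual
    carries edges, contrast traces, and noise.
    """
    # Simple separable low-pass kernel (an assumption; NSCT uses
    # specifically designed non-subsampled filter banks).
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    kernel = np.outer(h, h)
    low = convolve(image, kernel, mode="nearest")
    high = image - low
    return low, high

if __name__ == "__main__":
    img = np.random.rand(512, 512).astype(np.float32)
    low, high = nsp_split(img)
    assert low.shape == high.shape == img.shape  # same size as the input
```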

NSCT is performed on the features after convolution. After decomposing the extracted features into high- and low-frequency features, two subnetworks are used for independent training. Because only one NSP decomposition is performed, the low-frequency image retains more complete details. The high-frequency features contain noise and edge information, and a Sobel filter is employed to increase the weight of the edge information. The Sobel filter is a typical linear filter used for edge detection. It is based on two 3 × 3 kernels that are sensitive to edges in the horizontal and vertical directions, respectively. The Sobel filter extracts the gradient information of an image in the horizontal and vertical directions by calculating differences between pixel values in those directions, which yields an approximation of the image gradient.

As shown in Fig. 2, the vertical and horizontal gradients of the input image are extracted by the Sobel filter. In addition to the contours of the original image, the merged image gradient captures the imaging traces caused by the attenuation of the contrast agent. The image gradients are fused into the shallow features to increase the information available for learning image contours and contrast agent traces. This information is added to improve the performance of NCS-Unet in removing noise and artifacts during the training phase of the network.
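The Sobel step can be illustrated with the standard 3 × 3 kernels; this is a minimal sketch rather than the authors' code, and the way the two gradients are merged here (gradient magnitude) is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels for horizontal (x) and vertical (y) gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_gradients(image: np.ndarray):
    """Return the horizontal gradient, vertical gradient, and merged gradient."""
    gx = convolve(image, SOBEL_X, mode="nearest")
    gy = convolve(image, SOBEL_Y, mode="nearest")
    merged = np.sqrt(gx ** 2 + gy ** 2)  # fused gradient (magnitude, assumed)
    return gx, gy, merged
```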

2.2 Multiscale feature extraction blocks

The multiscale feature extraction block shown in Fig. 3 extracts details from both the high- and low-frequency features. The parameters of NCS-Unet and of the multiscale feature extraction block are listed in Tables 1 and 2. The multiscale feature extraction block consists of convolutional layers, ReLU activation layers, max-pooling layers, and upsampling layers. The convolution kernel size of each convolutional layer is 3 × 3, and the downsampling layer consists of a convolutional layer with a stride of 2. A transposed convolutional layer with a stride of 2 constitutes the upsampling layer in the network. Each multiscale feature extraction block adopts the encoder-decoder architecture of a typical Unet. The encoder compresses the features of the input image while extracting redundant information and image features. The decoder restores the original resolution of the image. To better share image detail information with other layers in NCS-Unet, skip connections are added between features of the same resolution during upsampling and downsampling.
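A minimal Keras sketch of one such encoder-decoder block, assuming a single downsampling/upsampling stage and an illustrative channel count (Tables 1 and 2 give the actual configuration used in the paper):

```python
import tensorflow as tf
from tensorflow.keras import layers

def feature_block(x, filters: int = 32):
    """Illustrative multiscale feature extraction block (Unet-style).

    Encoder: a strided 3x3 convolution halves the resolution.
    Decoder: a strided transposed convolution restores it.
    A skip connection merges same-resolution features.
    """
    skip = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    down = layers.Conv2D(filters, 3, strides=2, padding="same",
                         activation="relu")(skip)          # encoder
    up = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                activation="relu")(down)   # decoder
    merged = layers.Concatenate()([up, skip])               # skip connection
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(merged)

# Example: a 128 x 128 single-channel input, matching the training patch size.
inputs = tf.keras.Input(shape=(128, 128, 1))
outputs = feature_block(inputs)
model = tf.keras.Model(inputs, outputs)
```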

During encoding, the reduction in resolution facilitates the preservation of the main features of the image, while redundant and high-frequency information is ignored. The features extracted during encoding and decoding are merged. The trained NCS-Unet balances the preservation of image details against noise reduction. The loss function used by NCS-Unet during training is expressed as follows:

where y(i, j) and x(i, j) denote the simulated PCT image and the corresponding reference image, respectively, and M and N are the height and width of the image, respectively.
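A minimal sketch of this loss, assuming a per-pixel mean-squared-error form (an assumption consistent with the definitions of x, y, M, and N above, not necessarily the exact expression used by the authors):

```python
import tensorflow as tf

def restoration_loss(y_pred: tf.Tensor, x_ref: tf.Tensor) -> tf.Tensor:
    """Mean-squared error over all M x N pixels (assumed loss form).

    y_pred: network output for the simulated low-dose PCT image.
    x_ref:  corresponding normal-dose reference image.
    """
    return tf.reduce_mean(tf.square(y_pred - x_ref))
```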

3 Experiments and results

3.1 Experimental data and hyperparameters

In this study, the dataset for training and testing contained CT images provided by United Imaging. The dataset contained normal-dose CT images of 23 patients, and each patient had eight tomograms. Each tomogram contained 30 consecutive reconstructed images representing the entire process from contrast injection to contrast outflow. The normal-dose scan was performed with the following protocol: 250 mA, 80 kVp, and a slice thickness of 8.0 mm. A cone-beam imaging geometry was simulated to obtain projections of the dataset. Specifically, the source-to-detector and source-to-object distances were 1000.00 mm and 870.00 mm, respectively.

In addition, we added the Poisson noise model formulated in Eq. (3) to the projection data of the dataset and then reconstructed simulated LDCT images at one-tenth of the normal dose.
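A common way to realize such a one-tenth-dose Poisson model is to apply Poisson statistics to the transmitted photon counts along each ray; the sketch below assumes this formulation, and the incident photon count n0 is an illustrative value, not a parameter reported in the paper:

```python
import numpy as np

def simulate_low_dose(sinogram: np.ndarray, n0: float = 1e5,
                      dose_factor: float = 0.1, rng=None) -> np.ndarray:
    """Add Poisson noise to line integrals to mimic a reduced-dose scan.

    sinogram:    noise-free projection data (line integrals).
    n0:          incident photon count of the normal-dose scan (assumed value).
    dose_factor: 0.1 reproduces the one-tenth-dose setting described above.
    """
    rng = rng or np.random.default_rng()
    incident = n0 * dose_factor
    counts = rng.poisson(incident * np.exp(-sinogram))  # transmitted photons
    counts = np.maximum(counts, 1)                      # avoid log(0)
    return -np.log(counts / incident)                   # back to line integrals
```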

Fig. 2 (a1)-(a2) Original images; (b1)-(b2) Sobel vertical gradients of the original images; (c1)-(c2) Sobel horizontal gradients of the original images; (d1)-(d2) Sobel gradients of the original images

Fig. 3 (Color online) Multiscale feature extraction block

Table 1 Parameters of NCSUnet

Table 2 Multiscale feature extraction block architecture

Fig. 4 Denoising results of different methods. (a1)-(a3) Normal-dose PCT images, (b1)-(b3) low-dose PCT images, (c1)-(c3) results processed using DFR, (d1)-(d3) results processed using BM3D, (e1)-(e3) results processed using REDCNN, and (f1)-(f3) results processed using NCS-Unet

The detector size was 512 mm × 512 mm, and the size of each detection element was 0.5 mm. Forward projections were collected from 720 views with a 0.5° angular interval. All reconstructed CT images had 512 × 512 pixels, with a pixel size of 0.156 mm. The PCT images of one patient were used as the test set for the ablation experiment. A fivefold cross-validation scheme was used in the comparison experiment to demonstrate the restoration ability of NCS-Unet on perfusion images. The assignment of patients to the training and test sets was randomized. CT images from the training set were segmented, rotated, and shifted to increase the amount of training data. The original CT images were resized to 128 × 128. We used the Adam algorithm with quadratic gradient correction and its default parameters (beta1 = 0.9 and beta2 = 0.999) to optimize the loss function. To accelerate the convergence of the network, the initial learning rate was set to 1e-3. The total number of training epochs was 40, and the learning rate decreased as the number of epochs increased. All neural network methods were implemented in the TensorFlow framework. The computer configuration was as follows: Intel(R) Core(TM) i7-4790K 4.00 GHz CPU and an NVIDIA GTX 2080Ti GPU with 11 GB of memory. Ablation experiments and cross-validation were performed to verify the performance of NCS-Unet. We also compared the results of the different methods in terms of subjective evaluation and objective metrics; the objective metrics were the PSNR and SSIM. The PSNR is defined as follows:
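In its standard form, PSNR = 10·log10(MAX² / MSE), where MAX is the peak image intensity and MSE is the mean squared error between the restored image and the reference. A small helper implementing this (the choice of MAX is an assumption that depends on how the images are scaled; SSIM can be computed analogously, e.g., with skimage.metrics.structural_similarity):

```python
import numpy as np

def psnr(restored: np.ndarray, reference: np.ndarray, max_val: float = None) -> float:
    """Peak signal-to-noise ratio in dB (standard definition)."""
    max_val = max_val if max_val is not None else reference.max()  # assumed peak
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```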

Fig. 5 Differences between the normal-dose images and the images processed using the methods compared in this study. (a1)-(a3) Differences between the normal-dose and low-dose images, (b1)-(b3) differences between the normal-dose images and the images processed using the DFR algorithm, (c1)-(c3) differences between the normal-dose images and the images processed using the BM3D algorithm, (d1)-(d3) differences between the normal-dose images and the images processed using REDCNN, and (e1)-(e3) differences between the normal-dose images and the images processed using NCS-Unet

Fig. 6 (Color online) Singular value decomposition in the PMA software was used to calculate the perfusion parameters for the results processed using the different methods. Columns 1-6 show the perfusion parameter maps of CBF, CBV, and MTT calculated from the normal-dose, low-dose, DFR-processed, BM3D-processed, REDCNN-processed, and NCS-Unet-processed images, respectively

Table 3 Statistical properties (mean ± standard deviation) of different algorithms in the comparison experiments

3.2 Comparison of restoration results

In this study, the images and zoomed details of the region of interest (ROI) restored using different methods were visually compared with the normal-dose PCT images. The comparison methods were the BM3D denoising algorithm, the DFR sparse dictionary learning method, and the deep-learning-based REDCNN post-processing algorithm. A fivefold cross-validation scheme was used in the comparison experiment. The data from the 23 patients were divided into five parts: three parts contained data from five patients each, and the other two contained data from four patients each. One of the five parts was used as the test set, and the remaining four parts were used as the training set. This process was repeated five times until each part of the dataset had been used once as the test set. The assignment of patients to the training and test sets was randomized. The three comparison methods used the same training and test sets as the NCS-Unet network.
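A minimal sketch of the patient-level fivefold split described above (the random seed and patient indexing are placeholders, not the authors' protocol):

```python
import numpy as np

def five_fold_patient_split(n_patients: int = 23, seed: int = 0):
    """Randomly partition patients into five folds of sizes 5, 5, 5, 4, 4."""
    rng = np.random.default_rng(seed)
    patients = rng.permutation(n_patients)
    folds = [patients[:5], patients[5:10], patients[10:15],
             patients[15:19], patients[19:23]]
    # Each fold serves once as the test set; the remaining folds form the training set.
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

for train_ids, test_ids in five_fold_patient_split():
    pass  # train on train_ids, evaluate on test_ids
```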

The results of the denoising experiments are shown in Fig. 4. The denoising results of the compared methods were selected from the same test data, and the image display window was [0, 80] HU. Figure 4(a1)-(a3) show the normal-dose PCT images. The second column, (b1)-(b3), shows the low-dose PCT images, which correspond to 10% of the normal dose; the results of DFR dictionary learning are shown in (c1)-(c3); the results of the BM3D algorithm are shown in (d1)-(d3); (e1)-(e3) are the results of the REDCNN image post-processing algorithm; and (f1)-(f3) are the results of the proposed NCS-Unet. We observed that the details of the low-dose PCT images, including the brain structure and the injected contrast agent, were overwhelmed by a significant amount of noise. Some artifacts were also caused by the forward and backward projections. Such noise and artifacts can affect diagnosis by clinicians. Because the normal-dose PCT images contain some noise, the denoising effect of dictionary learning is poor, and the recovery of contrast agent details is not satisfactory. Compared with algorithms such as DFR dictionary learning and BM3D, the processing performance of the deep learning methods is significantly better. Both REDCNN and the proposed NCS-Unet can reduce most of the noise and remove artifacts, but the edge details produced by REDCNN are insufficient, and part of the contrast agent information is blurred. NCS-Unet adds high- and low-frequency decomposition and the edge information extracted by the Sobel filter, so during the training phase the image details are separated from the noise and artifacts. Compared with REDCNN, the results processed using NCS-Unet retained more image details. The details and the contrast agent in the original normal-dose CT image can be observed very distinctly in the part of Fig. 4 indicated by the red arrow. After adding noise, the traces of the contrast agent could not be clearly distinguished from the noise in the simulated low-dose image. The DFR dictionary learning was based on normal-dose CT images containing partial noise, making it impossible to process the simulated LDCT images properly. The results of BM3D denoising were clean, but the image details were somewhat distorted and differed significantly from the reference images. The REDCNN post-processing method removed noise and artifacts well; however, the image details were blurred compared with NCS-Unet, and the imaging traces of the contrast agent in the blood vessels and other details of the brain were not as pronounced as in the NCS-Unet results. The differences between the normal-dose images and those processed using the compared methods are shown in Fig. 5.

Table 4 Average perfusion parameters of different algorithms in the comparison experiments

Table 5 RMSE of perfusion parameters of different algorithms in the comparison experiments

Table 6 MAPE of perfusion parameters of different algorithms (%) in the comparison experiments

To further compare the restoration results of the different methods, we used singular value decomposition in the PMA software to calculate the perfusion parameter maps, including CBF, CBV, and MTT, for the different methods in this study. The perfusion parameter maps calculated using the different methods are shown in Fig. 6. The first, second, and third rows of maps are the CBF, CBV, and MTT, respectively, calculated using the different methods. These three perfusion parameters are commonly used in clinical diagnosis. The influence of noise on the perfusion parameters was apparent. As shown in Fig. 6, most of the details outside the vessels in the CBF, CBV, and MTT maps calculated from the simulated low-dose images were covered by noise. DFR dictionary learning does not separate the image from noise and artifacts during post-processing; therefore, its perfusion parameter maps were masked by noise. As indicated by the white arrows in the white rectangles in Fig. 6, although BM3D and REDCNN had better post-processing performance and reduced most of the noise, they erased part of the original image information. The perfusion parameter maps calculated from the NCS-Unet-processed images were closer to the characteristics of the normal-dose perfusion images.
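For context, the following is a compact sketch of truncated-SVD deconvolution as commonly used to estimate CBF, CBV, and MTT from a tissue curve and an arterial input function. It illustrates the general technique rather than the PMA software's implementation; the truncation threshold and sampling interval are assumptions, and scaling constants (tissue density, hematocrit) are omitted:

```python
import numpy as np

def svd_perfusion(tissue: np.ndarray, aif: np.ndarray, dt: float = 1.0,
                  lam: float = 0.2):
    """Estimate relative CBF, CBV, and MTT for one voxel via truncated SVD.

    tissue: concentration-time curve of a voxel, shape (T,).
    aif:    arterial input function, shape (T,).
    dt:     sampling interval in seconds (assumed).
    lam:    singular values below lam * max(singular value) are discarded.
    """
    T = len(aif)
    # Lower-triangular convolution matrix built from the AIF: A[i, j] = aif[i - j] * dt.
    A = np.zeros((T, T))
    for i in range(T):
        A[i, : i + 1] = aif[i::-1]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)    # truncation
    k = Vt.T @ (s_inv * (U.T @ tissue))                  # k(t) = CBF * R(t)
    cbf = k.max()
    cbv = np.sum(tissue) / np.sum(aif)                   # ratio of areas under the curves
    mtt = cbv / cbf if cbf > 0 else 0.0
    return cbf, cbv, mtt
```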

Fig. 7 Denoising results of different algorithms in the ablation experiments. (a1)-(a2) Normal-dose CT images, (b1)-(b2) LDCT images, (c1)-(c2) images processed using NCS-Unet with both the NSCT and Sobel filter removed, (d1)-(d2) images processed using NCS-Unet with the NSCT removed, (e1)-(e2) images processed using NCS-Unet with the Sobel filter removed, and (f1)-(f2) images processed using NCS-Unet

Table 7 Statistical properties (mean ± standard deviation) of different algorithms in the ablation experiments

In addition to the subjective evaluation of the results of the different algorithms, we also analyzed the PSNR and SSIM of the restoration results of DFR dictionary learning, the BM3D algorithm, the REDCNN image post-processing algorithm, and NCS-Unet on the data provided by United Imaging. The mean and standard deviation of the PSNR and SSIM over the test set were calculated; the results are listed in Table 3. For the low-dose reconstructed images, noise and artifacts resulted in a low PSNR and SSIM. After processing using the different algorithms, both the PSNR and SSIM improved to some degree. A higher PSNR was obtained by the BM3D and REDCNN image post-processing algorithms, but some image details were blurred while removing noise and artifacts, resulting in a low SSIM for these two algorithms. NCS-Unet improved both the SSIM and PSNR, which demonstrates that the proposed NCS-Unet retains more edge and contrast agent trace information during training and thereby improves imaging quality.

In addition to the direct subjective evaluation of the images restored by each algorithm and the comparison of objective indicators, this paper also provides statistics and an analysis of the perfusion parameters calculated for each algorithm. We compared the mean perfusion parameter values, the root mean square error (RMSE), and the mean absolute percentage error (MAPE) of the perfusion parameters for the different methods. As shown in Table 4, under the interference of noise and artifacts, the CBF and CBV of the low-dose images increased considerably, whereas the MTT was reduced, differing significantly from the mean perfusion parameter values of the normal-dose CT (NDCT) images. The CBF and CBV of the LDCT images processed by DFR dictionary learning also increased significantly. Correspondingly, the BM3D algorithm resulted in a decrease in the CBF and CBV parameters. From the statistical results of the different methods, the CBF and CBV of the NCS-Unet-processed images were closer to the results of the NDCT images, with the exception of the MTT. When calculating the perfusion parameters of an image processed using NCS-Unet, the software treated it as closer to a normal-dose CT image and therefore used a closer color mapping.

Tables 5 and 6 compare the RMSE and MAPE of the perfusion parameters for the different methods. The comparison of the RMSE and MAPE demonstrates that the results restored by NCS-Unet were closer to the normal-dose CT images. The reason for the high RMSE and MAPE of DFR dictionary learning is its poor processing power, which does not adequately reduce noise or eliminate artifacts. The BM3D algorithm blurred the images, resulting in perfusion parameters that differed dramatically from those calculated from the original reference images.
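For reference, the two error metrics reported in Tables 5 and 6 can be computed per perfusion map as follows (a simple sketch; any masking of background voxels, which the paper does not detail, is omitted):

```python
import numpy as np

def rmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square error between a perfusion map and its reference."""
    return float(np.sqrt(np.mean((estimate - reference) ** 2)))

def mape(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-6) -> float:
    """Mean absolute percentage error (%); eps guards against division by zero."""
    return float(np.mean(np.abs(estimate - reference) / (np.abs(reference) + eps)) * 100.0)
```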

Fig. 8 Differences between the normal-dose images and the images processed using the methods in the ablation experiment. Columns 1-5 show the differences between the normal-dose images and, respectively, the low-dose images, the images processed using NCS-Unet with both the NSCT and Sobel filter removed, the images processed using NCS-Unet with the NSCT removed, the images processed using NCS-Unet with the Sobel filter removed, and the images processed using NCS-Unet

Table 8 RMSE of perfusion parameters processed using different methods in the ablation experiments

Table 9 MAPE of perfusion parameters calculated by different methods in the ablation experiments (%)

3.3 Ablation experiment results

Because NCS-Unet introduces the NSCT and the Sobel filter, this section compares the restoration performance of the network after removing the NSCT and the Sobel filter, one at a time, with that of the complete NCS-Unet. After the NSCT was removed, NCS-Unet retained only the edge extraction of the Sobel filter. Correspondingly, NCS-Unet only separated high- and low-frequency feature images after the Sobel filter was removed. The ablation experiment used the same training and test sets and the same loss function. The dataset contained the normal-dose CT images of 23 patients; data from 22 patients were used as the training set, and data from one patient were used as the test set. The Adam algorithm was used to optimize the loss function for 20 training epochs. The results of the ablation experiment are shown in Fig. 7. NCS-Unet with the NSCT removed restored the image with noise remaining in the brain background region. The red arrow in the orange rectangle of Fig. 7 indicates that the image details were blurred compared with the restoration results of the complete NCS-Unet: several noise and contrast traces were missed, and the image details were not fully recovered. To demonstrate the contributions of the NSCT and the Sobel filter, we calculated objective metrics to evaluate the results of the different methods; the results are listed in Table 7. From the PSNR and SSIM results, we conclude that the restoration results obtained by adding only the NSCT decomposition are better than those obtained using only the Sobel filter, but the difference is not significant. Compared with using both simultaneously, the restoration results of NCS-Unet decreased significantly after removing either the NSCT or the Sobel filter. This strongly indicates that both the NSCT decomposition and the Sobel filter contribute to the post-processing of the image.

Fig. 9 (Color online) Singular value decomposition in the PMA software was used to calculate the perfusion parameters for the results of the different algorithms. Columns 1-6 show the perfusion parameter maps of CBF, CBV, and MTT calculated from the normal-dose image, the low-dose image, the image processed using NCS-Unet with both the NSCT and Sobel filter removed, the image processed using NCS-Unet with the NSCT removed, the image processed using NCS-Unet with the Sobel filter removed, and the image processed using NCS-Unet

The perfusion parameter maps calculated from the images processed using the different algorithms in the ablation experiment are shown in Fig. 9. As the red arrow in the yellow rectangle of Fig. 9 indicates, the NCS-Unet-processed perfusion parameter maps and the normal-dose perfusion maps had more similar image characteristics in terms of the details of some contrast agent traces. To better quantify the performance of NCS-Unet, we also compared the RMSE and MAPE of the perfusion parameters of the different methods. As listed in Tables 8 and 9, the RMSE and MAPE of the CBF and CBV calculated from the NCS-Unet-processed maps were significantly lower, with only an insignificant improvement in the MTT. This implies that the perfusion parameter maps of the NCS-Unet-processed images were closer to those calculated from the normal-dose CT images.

4 Discussion and summary

In this paper, a deep-learning-based perfusion CT image post-processing method, NCS-Unet, is proposed to restore low-dose perfusion CT images. The unique characteristics of the Sobel filter and NSCT were introduced into the proposed NCS-Unet. It extracts the high- and low-frequency information of the features by decomposing the convolutionally processed features using the introduced NSCT. The Sobel gradient information of the original image was added to better preserve the image edges and contrast agent traces. Denoising and ablation experiments were conducted to validate the performance of NCS-Unet. The results of the denoising experiment indicated that NCS-Unet had better image contour recovery ability than the other methods in terms of subjective visual assessment and could more clearly distinguish contrast agent traces from noise. In the denoising experiments, the PSNR and SSIM values of NCS-Unet on the test set were 39.20 ± 0.68 and 0.8212 ± 0.02822, respectively, which were better than those of the four methods compared in this study. The qualitative and quantitative analyses of the restoration results and the estimated perfusion parameters indicated that the performance of the proposed NCS-Unet was superior to that of the other methods referenced in this paper. To verify the importance of the introduced NSCT and Sobel filter for NCS-Unet, we conducted ablation experiments. Among the restoration performances of the methods in the ablation experiments, the PSNR and SSIM values of the complete NCS-Unet were the highest. The RMSE and MAPE of the perfusion parameters demonstrated the irreplaceable roles of both the Sobel filter and NSCT in NCS-Unet.

There are several limitations in the current work that warrant further investigation. First, this paper provides only a simple and condensed description of perfusion imaging; we ignored its complex principles and many other elements and focused only on the imaging part. Second, owing to the lack of equipment, all the data used in this study were simulated from perfusion CT data provided by United Imaging. Although the scanning protocol of C-arm CBCT was referenced, we hope to collect real data and conduct further research in the future. Third, the architecture of the NCS-Unet used in this study is primarily based on the Unet architecture, which is relatively simple. Many novel technologies, such as attention mechanisms and deep adversarial networks, are likely to improve the performance of neural networks to a certain extent. In the future, the network architecture can be further modified to improve performance.

In conclusion, a low-dose perfusion CT image post-processing method, NCS-Unet, is proposed and compared with the BM3D denoising algorithm, the DFR sparse dictionary learning method, and REDCNN post-processing. The results of the comparison experiments demonstrated that the perfusion CT images processed using the proposed NCS-Unet had higher imaging quality, and the restored perfusion maps were closer to the characteristics of normal-dose perfusion images. The results of the ablation experiments demonstrated that the NSCT and the Sobel filter collectively improve the processing performance of NCS-Unet.

Authors’ contribution All authors contributed to the study’s conception and design. Material preparation, data collection, and analysis were assisted by Hai-Chen Zhu, Zhan Wu, and Chang-Ping Du. The coding of the deep learning methods in the manuscript was assisted by Jia-Shun Liu and Yuan Gao; the first draft of the manuscript was written by Kai Chen and guided by Yang Chen, Xiao-Li Mai, Li-Po Zhao, and Chun-Feng Yang; and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.