A deep learning reconstruction framework for X-ray computed tomography with incomplete data


Authors: Jianbing Dong aff001;  Jian Fu aff001;  Zhao He aff001
Affiliations: Research Center of Digital Radiation Imaging and Biomedical Imaging, Beijing University of Aeronautics and Astronautics, Beijing, China aff001;  School of Mechanical Engineering and Automation, Beijing University of Aeronautics and Astronautics, Beijing, China aff002;  Jiangxi Research Institute, Beijing University of Aeronautics and Astronautics, Nanchang, China aff003
Published in: PLoS ONE 14(11)
Category: Research Article
doi: 10.1371/journal.pone.0224426

Abstract

As a powerful imaging tool, X-ray computed tomography (CT) allows us to investigate the inner structures of specimens quantitatively and nondestructively. Owing to practical implementation constraints, CT with incomplete projections occurs quite often. Conventional reconstruction algorithms do not handle incomplete data easily: they usually involve complicated parameter selection, are sensitive to noise, and are time-consuming. In this paper, we report a deep learning reconstruction framework for incomplete-data CT. It tightly couples the deep learning U-net with a CT reconstruction algorithm in the projection-sinogram domain. The U-net estimates not the artifacts caused by the incomplete data but the complete projection sinograms themselves. After training, the framework is fixed and can reconstruct the final high-quality CT image from a given incomplete projection sinogram. Taking sparse-view and limited-angle CT as examples, the framework has been validated and demonstrated with synthetic and experimental data sets. Because CT reconstruction is embedded in it, the framework naturally encapsulates the physical imaging model of CT systems and can readily be extended to other challenges. This work helps push the application of state-of-the-art deep learning techniques in the field of CT.
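The sinogram-domain coupling described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a simple linear interpolation along the angular axis stands in for the trained U-net that completes the sparse-view sinogram, after which any standard reconstruction algorithm (e.g. filtered back-projection) would be applied to the completed sinogram. The function name and the toy sinogram are hypothetical.

```python
import numpy as np

def complete_sinogram(sparse_sino, sparse_angles, full_angles):
    """Stand-in for the trained U-net of the framework: estimate the
    complete sinogram from a sparse-view one. Here each detector
    column is filled in by linear interpolation over the angles;
    in the paper this mapping is learned by a U-net."""
    n_det = sparse_sino.shape[1]
    full = np.empty((len(full_angles), n_det))
    for d in range(n_det):
        full[:, d] = np.interp(full_angles, sparse_angles, sparse_sino[:, d])
    return full

# Toy example: a smooth synthetic "sinogram" sampled at every 4th angle.
full_angles = np.linspace(0.0, np.pi, 180, endpoint=False)
true_sino = np.sin(full_angles)[:, None] * np.ones((1, 8))
sparse_sino = true_sino[::4]
estimated = complete_sinogram(sparse_sino, full_angles[::4], full_angles)
# The completed sinogram stays close to the fully sampled one; the final
# CT image would then be reconstructed from `estimated`.
max_err = np.abs(estimated - true_sino).max()
```

In the actual framework the interpolation step is replaced by the U-net's learned estimate, which is what allows the method to recover structure that simple interpolation cannot.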

Keywords:

Algorithms – Computed axial tomography – Convolution – Deep learning – Image processing – Imaging techniques – Neural networks – Signal to noise ratio




Published in: PLOS One, 2019, Issue 11