Search results

Number of results: 2

Abstract

The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-net and SegNet) on a segmentation task with three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy were evaluated for each network's output. In the initial phase, all networks were trained for 10 epochs. The most accurate segmentation was obtained with the U-net model, with a mean IoU of 93.2%. These results were further improved by a modified U-net with ResNet50 used as the encoder, trained for 30 epochs, which achieved the following results: mIoU 96.92%, IoU for the "bone" class 92.87%, mean Dice coefficient 98.41%, Dice coefficient for the "bone" class 96.31%, mAccuracy 99.85% and accuracy for the "bone" class 99.92%.
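The metrics reported above (class-wise and mean IoU, Dice coefficient and pixel accuracy) can all be derived from per-class counts of true and false positives and negatives. The sketch below is an illustrative NumPy implementation, not the authors' code; the function name and the class ordering (0 = background, 1 = patient outline, 2 = bone) are assumptions.

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, n_classes=3):
    """Class-wise and mean IoU, Dice coefficient and accuracy for integer label maps.

    Assumed class ordering: 0 = background, 1 = patient outline, 2 = bone.
    """
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    iou, dice, acc = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_true == c) & (y_pred == c))
        fp = np.sum((y_true != c) & (y_pred == c))
        fn = np.sum((y_true == c) & (y_pred != c))
        tn = y_true.size - tp - fp - fn
        iou.append(tp / (tp + fp + fn + 1e-9))
        dice.append(2 * tp / (2 * tp + fp + fn + 1e-9))
        acc.append((tp + tn) / y_true.size)
    return {
        "IoU": iou, "mIoU": float(np.mean(iou)),
        "Dice": dice, "mDice": float(np.mean(dice)),
        "accuracy": acc, "mAccuracy": float(np.mean(acc)),
    }
```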


Authors and Affiliations

Zuzanna Krawczyk ¹, Jacek Starzyński ¹

  1. Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland

Abstract

Multi-focus image fusion is a method of increasing image quality and reducing image redundancy. It is used in many fields, such as medical diagnostics, surveillance, and remote sensing. Although many algorithms are available, a common problem remains: most methods cannot adequately handle ghosting effects and unexpected noise. Computational intelligence has developed rapidly over recent decades, and multi-focus image fusion has developed with it. The proposed method performs multi-focus image fusion with an encoder-decoder algorithm based on the DeepLabV3+ architecture. During training, it uses a multi-focus dataset with ground-truth focus maps, from which the network model is constructed. This model is then applied to the test sets to predict a focus map, so the testing stage amounts to semantic focus prediction. Finally, the fusion step combines the focus map with the multi-focus images to produce the fused image. The results show that the fused images contain neither ghosting effects nor spurious small objects. The proposed method is assessed in two respects: the accuracy of the predicted focus map, and objective measures of the fused image such as mutual information, SSIM, and PSNR. The focus-map predictions show high precision and recall, and the SSIM, PSNR, and mutual information indexes are also high. The proposed method also performs more stably than other methods. Finally, the ResNet50-based model for multi-focus image fusion handles the ghosting problem well.
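As an illustration of the fusion step described above (not the authors' implementation), the sketch below assumes two registered multi-focus source images and a predicted binary focus map of the same spatial size, and composes the fused image pixel by pixel; the function name and array conventions are hypothetical.

```python
import numpy as np

def fuse_images(img_a, img_b, focus_map):
    """Compose a fused image from two multi-focus source images.

    focus_map is a predicted binary map: 1 where img_a is in focus,
    0 where img_b is in focus. For colour images the map is broadcast
    over the channel axis.
    """
    mask = focus_map.astype(np.float32)
    if img_a.ndim == 3:              # H x W x C colour image
        mask = mask[..., None]
    fused = mask * img_a + (1.0 - mask) * img_b
    return fused.astype(img_a.dtype)
```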

Authors and Affiliations

K. Hawari ¹, Ismail Ismail ¹ ²

  1. Universiti Malaysia Pahang, Faculty of Electrical and Electronics Engineering, 26300 Kuantan, Malaysia
  2. Politeknik Negeri Padang, Electrical Engineering Department, 25162, Padang, Indonesia
