Search results

Number of results: 2
Abstract

The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-net and Segnet) on the task of segmenting the images into three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation was obtained with the U-net model, with a mean IoU of 93.2%. These results were further improved by a modified U-net with ResNet50 as the encoder, trained for 30 epochs, which achieved the following scores: mIoU – 96.92%, “bone” class IoU – 92.87%, mDice coefficient – 98.41%, Dice coefficient for “bone” – 96.31%, mAccuracy – 99.85% and accuracy for the “bone” class – 99.92%.
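
As a quick reference for how such scores are computed, the sketch below evaluates per-class and mean IoU, Dice coefficient and pixel accuracy from integer-labeled ground-truth and prediction maps. It is a minimal NumPy illustration, not the authors' code; the function name and the per-class accuracy definition (binary accuracy of each class mask) are assumptions, and the class indices follow the convention named in the abstract.

    import numpy as np

    def segmentation_scores(y_true, y_pred, n_classes=3):
        # y_true, y_pred: integer label maps of identical shape with values
        # in {0, ..., n_classes-1}; here 0 = background, 1 = patient
        # outline, 2 = bone.
        iou, dice, acc = [], [], []
        for c in range(n_classes):
            t, p = (y_true == c), (y_pred == c)
            inter = np.logical_and(t, p).sum()
            union = np.logical_or(t, p).sum()
            iou.append(inter / union if union else 1.0)
            denom = t.sum() + p.sum()
            dice.append(2.0 * inter / denom if denom else 1.0)
            acc.append((t == p).mean())  # per-class binary pixel accuracy
        return {"IoU": iou, "mIoU": float(np.mean(iou)),
                "Dice": dice, "mDice": float(np.mean(dice)),
                "Acc": acc, "mAcc": float(np.mean(acc))}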

Bibliography

  1.  E. Stindel, et al., “Bone morphing: 3D morphological data for total knee arthroplasty”, Comput. Aided Surg. 7(3), 156–168 (2002), doi: 10.1002/igs.10042.
  2.  F. Azimifar, K. Hassani, A.H. Saveh, and F.T. Ghomsheh, “A medium invasiveness multi-level patient’s specific template for pedicle screw placement in the scoliosis surgery”, Biomed. Eng. Online 16, 130 (2017), doi: 10.1186/s12938-017-0421-0.
  3.  L. Yahia-Cherif, B. Gilles, T. Molet, and N. Magnenat-Thalmann, “Motion capture and visualization of the hip joint with dynamic MRI and optical systems”, Comp. Anim. Virtual Worlds 15, 377–385 (2004).
  4.  V. Pekar, T.R. McNutt, and M.R. Kaus, “Automated model-based organ delineation for radiotherapy planning in prostatic region”, Int. J. Radiat. Oncol. Biol. Phys. 60(3), 973–980 (2004).
  5.  D. Ravì, et al., “Deep learning for health informatics,” IEEE J. Biomed. Health. Inf. 21(1), 4–21 (2017), doi: 10.1109/JBHI.2016.2636665.
  6.  G. Litjens, et al., “A survey on deep learning in medical image analysis”, Med. Image Anal. 42, 60–88 (2017), doi: 10.1016/j.media.2017.07.005.
  7.  Z. Krawczyk and J. Starzyński, “YOLO and morphing-based method for 3D individualised bone model creation”, 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom (2020), doi: 10.1109/IJCNN48605.2020.9206783.
  8.  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 3431–3440 (2015), doi: 10.1109/CVPR.2015.7298965.
  9.  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid Scene Parsing Network,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 6230–6239 (2017), doi: 10.1109/CVPR.2017.660.
  10.  O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation”, in Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol. 9351, Springer, Cham. (2015), doi: 10.1007/978-3-319-24574-4_28.
  11.  V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation”, IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017), doi: 10.1109/TPAMI.2016.2644615.
  12.  Z. Krawczyk and J. Starzyński, “Deep learning approach for bone model creation”, 2020 IEEE 21st International Conference on Computational Problems of Electrical Engineering (CPEE), (2020), doi: 10.1109/CPEE50798.2020.9238678.
  13.  W. Qin, J. Wu, F. Han, Y. Yuan, W. Zhao, B. Ibragimov, J. Gu, and L. Xing, “Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation”, Phys. Med. Biol. 63(9), 95017 (2018), doi: 10.1088/1361-6560/aabd19.
  14.  S. Nikolov, et al., “Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy”, Technical Report, arXiv:1809.04430 (2018).
  15.  T.L. Kline, et al., “Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys”, J. Digit. Imaging 30, 442–448 (2017), doi: 10.1007/s10278-017-9978-1.
  16.  A. Wadhwa, A. Bhardwaj, and V.S. Verma, “A review on brain tumor segmentation of MRI images”, Magn. Reson. Imaging 61, 247–259 (2019), doi: 10.1016/j.mri.2019.05.043.
  17.  J. Xu, X. Luo, G. Wang, H. Gilmore, and A. Madabhushi, “A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images”, Neurocomputing 191, 214–223 (2016), doi: 10.1016/j.neucom.2016.01.034.
  18.  Z. Swiderska-Chadaj, T. Markiewicz, J. Gallego, G. Bueno, B. Grala, and M. Lorent, “Deep learning for damaged tissue detection and segmentation in Ki-67 brain tumor specimens based on the U-net model”, Bull. Pol. Acad. Sci. Tech. Sci. 66(6), 849–856 (2018), doi: 10.24425/bpas.2018.125932.
  19.  S. Lindgren Belal, et al., “Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases”, Eur. J. Radiol. 113, 89–95 (2019), doi: 10.1016/j.ejrad.2019.01.028.
  20.  A. Klein, J. Warszawski, J. Hillengaß, and K.H. Maier-Hein, “Automatic bone segmentation in whole-body CT images”, Int. J. Comput. Assist. Radiol. Surg. 14(1), 21–29 (2019), doi: 10.1007/s11548-018-1883-7.
  21.  J. Minnema, M. van Eijnatten, W. Kouw, F. Diblen, A. Mendrik, and J. Wolff, “CT image segmentation of bone for medical additive manufacturing using a convolutional neural network”, Comput. Biol. Med. 103, 130–139 (2018), doi: 10.1016/j.compbiomed.2018.10.012.
  22.  T. Les, T. Markiewicz, S. Osowski, and M. Jesiotr, “Automatic reconstruction of overlapped cells in breast cancer FISH images”, Expert Syst. Appl. 137, 335–342 (2019).
  23.  F. Yokota, T. Okada, M. Takao, N. Sugano, Y. Tada, and Y. Sato, “Automated segmentation of the femur and pelvis from 3D CT data of diseased hip using hierarchical statistical shape model of joint structure”, Med. Image Comput. Comput. Assist. Interv., 811–818 (2009), doi: 10.1007/978-3-642-04271-3_98.
  24.  D. Gupta, “Semantic segmentation library”, accessed 19-Jan-202, [Online], Available: https://divamgupta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html.
  25.  A.B. Jung, et al., “Imgaug library”, accessed 01-Feb-2020, [Online], Available: https://github.com/aleju/imgaug (2020).
  26.  F. Chollet, et al., “Keras”, [Online], Available: https://keras.io, (2015).
  27.  M. Abadi, et al., “TensorFlow: Large-scale machine learning on heterogeneous systems”, [Online], Available: tensorflow.org, (2015).
  28.  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 770–778 (2016), doi: 10.1109/CVPR.2016.90.
  29.  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition”, CoRR, (2015).
  30.  O. Russakovsky, et al., “ImageNet large scale visual recognition challenge”, Int. J. Comput. Vision 115(3), 211–252 (2015), doi: 10.1007/s11263-015-0816-y.
  31.  VGG network weights, [Online], Available: https://www.robots.ox.ac.uk/~vgg/research/very_deep/
  32.  Resnet network weights, [Online], Available: https://github.com/KaimingHe/deep-residual-networks.
  33.  P. Leydon, M. O’Connell, D. Greene, and K.M. Curran, “Bone Segmentation in Contrast Enhanced Whole-Body Computed Tomography”, arXiv (2020), https://arxiv.org/abs/2008.05223.

Authors and Affiliations

Zuzanna Krawczyk 1
Jacek Starzyński 1

  1. Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland

Abstract

This work presents an automatic system for generating kidney boundaries in computed tomography (CT) images, and describes the main medical image processing steps that make up the system. Image segmentation is performed with the U-Net network, which is now widely used as a standard solution for many medical image processing tasks. An innovative framing of the input data was implemented to improve the quality of the training data and to reduce its size. Precision-recall analysis was performed to determine the optimal image threshold value. To eliminate false-positive errors, a common issue in segmentation based on neural networks, a volumetric analysis of coherent areas was applied. The developed system produces kidney boundaries fully automatically and can also generate a three-dimensional kidney model. It can be helpful to those who analyze medical images, and to medical specialists in medical centers, especially those who prepare descriptions of CT examinations. Working fully automatically, the system can help to increase the accuracy of medical diagnoses and reduce the time needed to prepare medical descriptions.
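
To make the two post-processing steps concrete, the sketch below shows one plausible reading of what the abstract describes: picking a binarization threshold from the precision-recall curve, and discarding small coherent (connected) regions as false positives. It uses scikit-learn and SciPy; the function names and the min_voxels value are illustrative assumptions, not the paper's published method or parameters.

    import numpy as np
    from scipy import ndimage
    from sklearn.metrics import precision_recall_curve

    def best_threshold(y_true, y_prob):
        # Choose the probability threshold that maximizes F1 along the
        # precision-recall curve (one plausible "optimal" criterion).
        # y_true: binary ground-truth mask, y_prob: network probabilities.
        prec, rec, thr = precision_recall_curve(y_true.ravel(), y_prob.ravel())
        f1 = 2 * prec * rec / (prec + rec + 1e-12)
        return thr[np.argmax(f1[:-1])]  # f1[:-1] aligns with thr

    def drop_small_components(mask, min_voxels=5000):
        # Volumetric analysis of coherent areas: label the 3D connected
        # components of the binary mask and keep only those whose voxel
        # count reaches a minimum volume; smaller blobs are treated as
        # false positives. min_voxels is an illustrative value.
        labeled, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
        keep_ids = np.nonzero(sizes >= min_voxels)[0] + 1
        return np.isin(labeled, keep_ids)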

Bibliography

  1.  Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition”, Proc. IEEE 86(11), 2278‒2324 (1998), doi: 10.1109/5.726791.
  2.  F. Isensee and K.H. Maier-Hein, “An attempt at beating the 3D U-Net”, arXiv preprint arXiv:1908.02182, 2019.
  3.  Ö. Çiçek, A. Abdulkadir, S.S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation”, in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, pp. 424‒432, Springer International Publishing, 2016.
  4.  C. Li, W. Chen, and Y. Tan, “Render U-Net: A Unique Perspective on Render to Explore Accurate Medical Image Segmentation”, Appl. Sci. 10(18), 6439 (2020), doi: 10.3390/app10186439.
  5.  Z. Fatemeh, S. Nicola, K. Satheesh, and U. Eranga, “Ensemble U‐net‐based method for fully automated detection and segmentation of renal masses on computed tomography images”, Med. Phys. 47(9), 4032‒4044 (2020), doi: 10.1002/mp.14193.
  6.  O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation”, ArXiv, abs/1505.04597, 2015.
  7.  J. Ferlay, M. Ervik, F. Lam, M. Colombet, and L. Mery, “Global Cancer Observatory: Cancer Today”, [Online], Available: https://gco.iarc.fr/today.
  8.  P.A. Humphrey, H. Moch, A.L. Cubilla, T. M. Ulbright, and V.E. Reuter, “The 2016 WHO Classification of Tumours of the Urinary System and Male Genital Organs-Part B: Prostate and Bladder Tumours”, Eur. Urol. 70(1), 106‒119 (2016), doi: 10.1016/j.eururo.2016.02.028.
  9.  D.L. Pham, C. Xu, and J.L. Prince, “Current Methods in Medical Image Segmentation”, Ann. Rev. Biomed. Eng. 2(1), 315‒337 (2000), doi: 10.1146/annurev.bioeng.2.1.315.
  10.  B. Tsagaan, A. Shimizu, H. Kobatake, and K. Miyakawa, “An Automated Segmentation Method of Kidney Using Statistical Information”, in Medical Image Computing and Computer-Assisted Intervention — MICCAI 2002, pp. 556‒563, Springer Berlin Heidelberg, 2002.
  11.  J.C. Bezdek, “Objective Function Clustering”, in Pattern Recognition with Fuzzy Objective Function Algorithms, pp. 43‒93, Boston: Springer US, 1981.
  12.  K. Sharma et al., “Automatic Segmentation of Kidneys using Deep Learning for Total Kidney Volume Quantification in Autosomal Dominant Polycystic Kidney Disease”, Sci. Rep. 7(1), 2049 (2017), doi: 10.1038/s41598-017-01779-0.
  13.  P. Jackson, N. Hardcastle, N. Dawe, T. Kron, M.S. Hofman, and R.J. Hicks, “Deep Learning Renal Segmentation for Fully Automated Radiation Dose Estimation in Unsealed Source Therapy”, Front. Oncol. 8, 215 (2018), doi: 10.3389/fonc.2018.00215.
  14.  C. Li, W. Chen, and Y. Tan, “Point-Sampling Method Based on 3D U-Net Architecture to Reduce the Influence of False Positive and Solve Boundary Blur Problem in 3D CT Image Segmentation”, Appl. Sci. 10(19), 6838 (2020).
  15.  A. Myronenko and A. Hatamizadeh, “3d kidneys and kidney tumor semantic segmentation using boundary-aware networks”, arXiv preprint arXiv:1909.06684, 2019.
  16.  W. Zhao, D. Jiang, J. P. Queralta, and T. Westerlund, “Multi-Scale Supervised 3D U-Net for Kidneys and Kidney Tumor Segmentation”, arXiv preprint arXiv:2004.08108, 2020.
  17.  W. Zhao, D. Jiang, J. Peña Queralta, and T. Westerlund, “MSS U-Net: 3D segmentation of kidneys and tumors from CT images with a multi-scale supervised U-Net”, Inform. Med. Unlocked 19, 100357 (2020), doi: 10.1016/j.imu.2020.100357.
  18.  Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series”, in The handbook of brain theory and neural networks, pp. 255–258, MIT Press, 1998.
  19.  T. Les, T. Markiewicz, M. Dziekiewicz, and M. Lorent, “Kidney Boundary Detection Algorithm Based on Extended Maxima Transformations for Computed Tomography Diagnosis”, Appl. Sci. 10(21), 7512 (2020), doi: 10.3390/app10217512.
  20.  Z. Swiderska-Chadaj, T. Markiewicz, J. Gallego, G. Bueno, B. Grala, and M. Lorent, “Deep learning for damaged tissue detection and segmentation in Ki-67 brain tumor specimens based on the U-net model”, Bull. Pol. Acad. Sci. Tech. Sci. 66(6), 849‒856 (2018), doi: 10.24425/bpas.2018.125932.
  21.  W. Wieclawek, “3D marker-controlled watershed for kidney segmentation in clinical CT exams”, Biomed. Eng. Online 17(1), 26 (2018), doi: 10.1186/s12938-018-0456-x.
  22.  T. Les, “Patch-based renal CTA image segmentation with U-Net”, in 2020 IEEE 21st International Conference on Computational Problems of Electrical Engineering (CPEE), Poland, 2020, pp. 1‒4, doi: 10.1109/CPEE50798.2020.9238735.

Authors and Affiliations

Tomasz Les 1

  1. Faculty of Electrical Engineering, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland
