Search results

Number of results: 3

Abstract

In brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Extracting clinically meaningful information from imaging modalities such as computed tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening tools used by radiologists. This paper introduces a universal framework for two parts of the dose control process: tumour detection and tumour area segmentation from medical images. Within the framework, methods were implemented to detect glioma tumours in CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and used to fuse the results of CT and PET examinations. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the model outputs bounding box coordinates for each tumour found in the image. U-Net was used to perform semantic segmentation, i.e. to delineate malignant cells and the tumour area. Transfer learning was applied to increase the accuracy of the models given the limited dataset, and data augmentation was used to increase the number of training samples. The implemented framework can be reused in other applications that combine object detection and area segmentation in grayscale and RGB images, in particular to build computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems that assist doctors and medical care providers in the healthcare industry.
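The CT/PET fusion step lends itself to a short illustration. The snippet below is a minimal sketch, not the authors' exact pipeline: it assumes PyTorch with torchvision >= 0.13, pre-aligned grayscale slices scaled to [0, 1], and an illustrative choice of VGG19 layer cut-off and softmax weighting heuristic (ImageNet input normalization is skipped for brevity).

```python
# Minimal sketch of saliency-weighted CT/PET fusion driven by pre-trained
# VGG19 features (illustrative assumptions; not the authors' exact method).
import torch
import torch.nn.functional as F
import torchvision.models as models

# The first two convolutional blocks of VGG19 are enough for a saliency map.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:9].eval()

def fuse_slices(ct: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
    """Fuse two aligned grayscale slices, each a (H, W) float tensor in [0, 1]."""
    def saliency(img):
        x = img.expand(1, 3, *img.shape)           # replicate channel for VGG input
        with torch.no_grad():
            feat = vgg(x)                          # (1, C, H', W') deep features
        act = feat.abs().sum(dim=1, keepdim=True)  # l1-norm activity map
        return F.interpolate(act, size=img.shape,  # back to slice resolution
                             mode="bilinear", align_corners=False)

    weights = torch.softmax(torch.cat([saliency(ct), saliency(pet)], dim=1), dim=1)
    fused = weights[:, :1] * ct + weights[:, 1:] * pet  # per-pixel weighted sum
    return fused.squeeze()
```

A slice fused this way could then be fed to the detection and segmentation models in place of a single-modality input.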

Bibliography

  1.  Cancer Research UK Statistics from the 5th of March 2020. [Online]. https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/brain-other-cns-and-intracranial-tumours/incidence#ref-
  2.  E. Kot, Z. Krawczyk, K. Siwek, and P.S. Czwarnowski, “U-Net and Active Contour Methods for Brain Tumour Segmentation and Visualization,” 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, 2020, pp. 1‒7, doi: 10.1109/IJCNN48605.2020.9207572.
  3.  J. Kim, J. Hong, and H. Park, “Prospects of deep learning for medical imaging,” Precis. Future Med. 2(2), 37–52 (2018), doi: 10.23838/pfm.2018.00030.
  4.  E. Kot, Z. Krawczyk, and K. Siwek, “Brain Tumour Detection and Segmentation Using Deep Learning Methods,” in Computational Problems of Electrical Engineering, 2020.
  5.  A.F. Tamimi and M. Juweid, “Epidemiology and Outcome of Glioblastoma,” in: Glioblastoma [Online]. Brisbane (AU): Codon Publications, 2017, doi: 10.15586/codon.glioblastoma.2017.ch8.
  6.  A. Krizhevsky, I. Sutskever, and G.E. Hinton, “ImageNet classification with deep convolutional neural networks,” in: Advances in Neural Information Processing Systems, 2012, pp. 1097‒1105.
  7.  M.A. Al-masni, et al., “Detection and classification of the breast abnormalities in digital mammograms via regional Convolutional Neural Network,” 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, 2017, pp. 1230‒1233, doi: 10.1109/EMBC.2017.8037053.
  8.  P. Yin, R. Yuan, Y. Cheng, and Q. Wu, “Deep Guidance Network for Biomedical Image Segmentation,” IEEE Access 8, 116106‒116116 (2020), doi: 10.1109/ACCESS.2020.3002835.
  9.  R. Sindhu, G. Jose, S. Shibon, and V. Varun, “Using YOLO based deep learning network for real time detection and localization of lung nodules from low dose CT scans”, Proc. SPIE 10575, Medical Imaging 2018: Computer-Aided Diagnosis, 105751I, 2018, doi: 10.1117/12.2293699.
  10.  R. Ezhilarasi and P. Varalakshmi, “Tumor Detection in the Brain using Faster R-CNN,” 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), Palladam, India, 2018, pp. 388‒392, doi: 10.1109/I-SMAC.2018.8653705.
  11.  S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99.
  12.  S. Liu, H. Zheng, Y. Feng, and W. Li, “Prostate cancer diagnosis using deep learning with 3D multiparametric MRI,” in Proceedings of Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, Bellingham: International Society for Optics and Photonics (SPIE), 2017, p. 1013428.
  13.  M. Gurbină, M. Lascu, and D. Lascu, “Tumor Detection and Classification of MRI Brain Image using Different Wavelet Transforms and Support Vector Machines,” in 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 2019, pp. 505‒508, doi: 10.1109/TSP.2019.8769040.
  14.  H. Dong, G. Yang, F. Liu, Y. Mo, and Y. Guo, “Automatic brain tumor detection and segmentation using U-net based fully convolutional networks,” in: Medical image understanding and analysis, pp. 506‒517, eds. Valdes Hernandez M, Gonzalez-Castro V, Cham: Springer, 2017.
  15.  O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science, vol. 9351, doi: 10.1007/978-3-319-24574-4_28.
  16.  K. Hu, C. Liu, X. Yu, J. Zhang, Y. He, and H. Zhu, “A 2.5D Cancer Segmentation for MRI Images Based on U-Net,” in 2018 5th International Conference on Information Science and Control Engineering (ICISCE), Zhengzhou, 2018, pp. 6‒10, doi: 10.1109/ICISCE.2018.00011.
  17.  H.N.T.K. Kaldera, S.R. Gunasekara, and M.B. Dissanayake, “Brain tumor Classification and Segmentation using Faster R-CNN,” Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 2019, pp. 1‒6, doi: 10.1109/ICASET.2019.8714263.
  18.  B. Stasiak, P. Tarasiuk, I. Michalska, and A. Tomczyk, “Application of convolutional neural networks with anatomical knowledge for brain MRI analysis in MS patients”, Bull. Pol. Acad. Sci. Tech. Sci. 66(6), 857–868 (2018), doi: 10.24425/bpas.2018.125933.
  19.  L. Hui, X. Wu, and J. Kittler, “Infrared and Visible Image Fusion Using a Deep Learning Framework,” 24th International Conference on Pattern Recognition (ICPR), Beijing, 2018, pp. 2705‒2710, doi: 10.1109/ICPR.2018.8546006.
  20.  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  21.  M. Simon, E. Rodner, and J. Denzler, “ImageNet pre-trained models with batch normalization,” arXiv preprint arXiv:1612.01452, 2016.
  22.  VGG19-BN model implementation. [Online]. https://pytorch.org/vision/stable/_modules/torchvision/models/vgg.html
  23.  D. Jha, M.A. Riegler, D. Johansen, P. Halvorsen, and H.D. Johansen, “DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation,” 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 2020, pp. 558‒564, doi: 10.1109/CBMS49503.2020.00111.
  24.  Jupyter notebook with fusion code. [Online]. https://github.com/ekote/computer-vision-for-biomedical-images-processing/blob/master/papers/polish_acad_of_scienc_2020_2021/fusion_PET_CT_2020.ipynb
  25.  E. Geremia et al., “Spatial decision forests for MS lesion segmentation in multi-channel magnetic resonance images”, NeuroImage 57(2), 378‒390 (2011).
  26.  D. Anithadevi and K. Perumal, “A hybrid approach based segmentation technique for brain tumor in MRI Images,” Signal Image Process.: Int. J. 7(1), 21‒30 (2016), doi: 10.5121/sipij.2016.7103.
  27.  S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv preprint arXiv:1502.03167.
  28.  S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137‒1149, (2017), doi: 10.1109/TPAMI.2016.2577031.
  29.  T-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Lawrence Zitnick, “Microsoft COCO: common objects in context,” in Computer Vision – ECCV 2014, 2014, pp. 740–755.
  30.  Original Mask R-CNN model. [Online]. https://github.com/matterport/Mask_RCNN/releases/tag/v2.0
  31.  Mask R-CNN model. [Online]. https://github.com/ekote/computer-vision-for-biomedical-images-processing/releases/tag/1.0, doi: 10.5281/zenodo.3986798.
  32.  T. Les, T. Markiewicz, S. Osowski, and M. Jesiotr, “Automatic reconstruction of overlapped cells in breast cancer FISH images,” Expert Syst. Appl. 137, 335‒342 (2019), doi: 10.1016/j.eswa.2019.05.031.
  33.  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation”, Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2015, pp. 3431‒3440.
  34.  The U-Net architecture adjusted to 64×64 input image size. [Online]. http://bit.ly/unet64x64

Authors and Affiliations

Estera Kot¹, Zuzanna Krawczyk¹, Krzysztof Siwek¹, Leszek Królicki², Piotr Czwarnowski²

  1. Warsaw University of Technology, Faculty of Electrical Engineering, Pl. Politechniki 1, 00-661 Warsaw, Poland
  2. Medical University of Warsaw, Nuclear Medicine Department, ul. Banacha 1A, 02-097 Warsaw, Poland

Abstract

The liver is a vital organ of the human body, and hepatic cancer is one of the major causes of cancer deaths. Early and rapid diagnosis can reduce the mortality rate, and it can be achieved through computerized cancer diagnosis and surgery planning systems, in which segmentation plays a major role. This work evaluated the efficacy of the SegNet model for liver segmentation and of a particle swarm optimization-based clustering technique for liver lesion segmentation. Over 2400 CT images were used to train the deep learning network, and ten CT datasets were used to validate the algorithm. The segmentation results were satisfactory: the Dice coefficient and volumetric overlap error achieved were 0.940 ± 0.022 and 0.112 ± 0.038, respectively, for the liver, and 0.4629 ± 0.287 and 0.6986 ± 0.203, respectively, for lesion delineation. The proposed method is effective for liver segmentation; however, lesion segmentation needs to be further improved for better accuracy.
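The two figures of merit quoted above are straightforward to reproduce for a given pair of masks. Below is a minimal NumPy sketch (function and array names are illustrative, not from the paper) of the Dice coefficient and the volumetric overlap error computed from binary liver or lesion masks.

```python
# Minimal sketch of the overlap metrics quoted above, from binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|); 1.0 means perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def volumetric_overlap_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """VOE = 1 - |P ∩ G| / |P ∪ G|; 0.0 means perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 - intersection / union
```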

Authors and Affiliations

P Vaidehi Nayantara¹, Surekha Kamath¹, Manjunath KN², Rajagopal Kadavigere²

  1. Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
  2. Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India

Abstract

The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-net and Segnet) on a segmentation task with three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation results were obtained with the U-net model, with a mean IoU of 93.2%. These results were further improved by a modified U-net with ResNet50 as the encoder, trained for 30 epochs, which achieved an mIoU of 96.92%, a bone-class IoU of 92.87%, a mean Dice coefficient of 98.41% (96.31% for the bone class), a mean pixel accuracy of 99.85% and a bone-class accuracy of 99.92%.
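The reported class-wise and mean IoU values can be computed from predicted and ground-truth label maps as in the short NumPy sketch below; the class coding (0 = background, 1 = patient outline, 2 = bone) is an assumption inferred from the abstract, and names are illustrative.

```python
# Minimal sketch of class-wise and mean IoU for three-class label maps.
import numpy as np

def class_iou(pred: np.ndarray, gt: np.ndarray, n_classes: int = 3):
    """Per-class intersection over union; NaN for classes absent from both maps."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        ious.append(np.logical_and(p, g).sum() / union if union else float("nan"))
    return ious

def mean_iou(pred: np.ndarray, gt: np.ndarray, n_classes: int = 3) -> float:
    """Mean IoU over the classes present in either map."""
    return float(np.nanmean(class_iou(pred, gt, n_classes)))
```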

Bibliography

  1.  E. Stindel, et al., “Bone morphing: 3D morphological data for total knee arthroplasty,” Comput. Aided Surg. 7(3), 156–168 (2002), doi: 10.1002/igs.10042.
  2.  F. Azimifar, K. Hassani, A.H. Saveh, and F.T. Ghomsheh, “A medium invasiveness multi-level patient’s specific template for pedicle screw placement in the scoliosis surgery”, Biomed. Eng. Online 16, 130 (2017), doi: 10.1186/s12938-017-0421-0.
  3.  L. Yahia-Cherif, B. Gilles, T. Molet, and N. Magnenat-Thalmann, “Motion capture and visualization of the hip joint with dynamic MRI and optical systems”, Comp. Anim. Virtual Worlds 15, 377–385 (2004).
  4.  V. Pekar, T.R. McNutt, and M.R. Kaus, “Automated modelbased organ delineation for radiotherapy planning in prostatic region”, Int. J. Radiat. Oncol. Biol. Phys. 60(3), 973–980 (2004).
  5.  D. Ravì, et al., “Deep learning for health informatics,” IEEE J. Biomed. Health. Inf. 21(1), 4–21 (2017), doi: 10.1109/JBHI.2016.2636665.
  6.  G. Litjens, et al., “A survey on deep learning in medical image analysis”, Med. Image Anal. 42, 60–88 (2017), doi: 10.1016/j.media.2017.07.005.
  7.  Z. Krawczyk and J. Starzyński, “YOLO and morphing-based method for 3D individualised bone model creation”, 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom (2020), doi: 10.1109/IJCNN48605.2020.9206783.
  8.  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 3431–3440 (2015), doi: 10.1109/CVPR.2015.7298965.
  9.  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid Scene Parsing Network,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 6230–6239 (2017), doi: 10.1109/CVPR.2017.660.
  10.  O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation”, in Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol. 9351, Springer, Cham. (2015), doi: 10.1007/978-3-319-24574-4_28.
  11.  V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation”, IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017), doi: 10.1109/TPAMI.2016.2644615.
  12.  Z. Krawczyk and J. Starzyński, “Deep learning approach for bone model creation”, 2020 IEEE 21st International Conference on Computational Problems of Electrical Engineering (CPEE), (2020), doi: 10.1109/CPEE50798.2020.9238678.
  13.  W. Qin, J. Wu, F. Han, Y. Yuan, W. Zhao, B. Ibragimov, J. Gu, and L. Xing, “Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation”, Phys. Med. Biol. 63(9), 95017 (2018), doi: 10.1088/1361-6560/aabd19.
  14.  S. Nikolov, et al., “Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy”, Technical Report, arXiv:1809.04430 (2018).
  15.  T.L. Kline, et al., “Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys”, J Digit Imaging 30, 442–448 (2017), doi: 10.1007/s10278-017-9978-1.
  16.  A. Wadhwa, A. Bhardwaj, and V.S. Verma, “A review on brain tumor segmentation of MRI images”, Magn. Reson. Imaging 61, 247–259 (2019), doi: 10.1016/j.mri.2019.05.043.
  17.  J. Xu, X. Luo, G. Wang, H. Gilmore, and A. Madabhushi, “A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images”, Neurocomputing 191, 214–223 (2016), doi: 10.1016/j.neucom.2016.01.034.
  18.  Z. Swiderska-Chadaj, T. Markiewicz, J. Gallego, G. Bueno, B. Grala, and M. Lorent, “Deep learning for damaged tissue detection and segmentation in Ki-67 brain tumor specimens based on the U-net model”, Bull. Pol. Acad. Sci. Tech. Sci. 66(6), 849–856 (2018), doi: 10.24425/bpas.2018.125932.
  19.  S. Lindgren Belal, et al., “Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases”, Eur. J. Radiol. 113, 89–95 (2019), doi: 10.1016/j.ejrad.2019.01.028.
  20.  A. Klein, J. Warszawski, J. Hillengaß, and K.H. Maier-Hein, “Automatic bone segmentation in whole-body CT images”, Int J Comput Assist Radiol Surg. 14(1), 21–29 (2019), doi: 10.1007/s11548-018-1883-7.
  21.  J. Minnema, M. van Eijnatten, W. Kouw, F. Diblen, A. Mendrik, and J. Wolff, “CT image segmentation of bone for medical additive manufacturing using a convolutional neural network”, Comput. Biol. Med. 103, 130–139 (2018), doi: 10.1016/j.compbiomed.2018.10.012.
  22.  T. Les, T. Markiewicz, T. Osowski, and M. Jesiotr, “Automatic reconstruction of overlapped cells in breast cancer FISH images”, Expert Syst. Appl. 137, 335–342 (2019).
  23.  F. Yokota, T. Okada, M. Takao, N. Sugano, Y. Tada, and Y. Sato, “Automated segmentation of the femur and pelvis from 3D CT data of diseased hip using hierarchical statistical shape model of joint structure”, Med Image Comput Comput Assist Interv., 811–818 (2019), doi: 10.1007/978-3-642-04271-3_98.
  24.  D. Gupta, “Semantic segmentation library”, accessed 19-Jan-202, [Online], Available: https://divamgupta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html.
  25.  A.B. Jung, et al., “Imgaug library”, accessed 01-Feb-2020, [Online], Available: https://github.com/aleju/imgaug (2020).
  26.  F. Chollet, et al., “Keras”, [Online], Available: https://keras.io, (2015).
  27.  M. Abadi, et al., “TensorFlow: Large-scale machine learning on heterogeneous systems”, [Online], Available: tensorflow.org, (2015).
  28.  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 770–778 (2016), doi: 10.1109/CVPR.2016.90.
  29.  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition”, CoRR, (2015).
  30.  O. Russakovsky, et al., “ImageNet large scale visual recognition challenge”, Int. J. Comput. Vision 115(3), 211–252 (2015), doi: 10.1007/s11263-015-0816-y.
  31.  VGG network weights, [Online], Available: https://www.robots.ox.ac.uk/~vgg/research/very_deep/
  32.  Resnet network weights, [Online], Available: https://github.com/KaimingHe/deep-residual-networks.
  33.  P. Leydon, M. O’Connell, D. Greene, and K.M. Curran, “Bone Segmentation in Contrast Enhanced Whole-Body Computed Tomography”, arXiv (2020), https://arxiv.org/abs/2008.05223.

Authors and Affiliations

Zuzanna Krawczyk¹, Jacek Starzyński¹

  1. Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland
