Search results

Number of results: 6

Abstract

This paper proposes a new approach to the processing and analysis of medical images. We introduce the term and methodology of medical data understanding as a new step in a pipeline that begins with image processing and continues through analysis and classification (recognition). The paper situates this new technology of machine perception and image understanding in the context of the better-known, classic techniques of image processing, analysis, segmentation, and classification.


Authors and Affiliations

R. Tadeusiewicz
M.R. Ogiela

Abstract

In modern medicine, raster image analysis systems that automate diagnosis from the results of instrumental patient monitoring are becoming increasingly widespread. One of the most important stages of such analysis is detecting the mask of the object to be recognized in the image. Given the multivariate, multifactorial nature of medical image analysis, neural network tools are shown to be the most promising means of extracting masks. Known detection tools, however, are highly specialized and poorly adapted to variable conditions of use, which necessitates building an effective neural network model adapted to mask detection in medical images. An approach is proposed for determining the most effective type of neural network model: expert evaluation of the acceptable model types, followed by computer experiments to make the final decision. The Intersection over Union and Dice Loss metrics can be used to evaluate the effectiveness of a neural network model. The proposed solutions were verified by isolating the brachial plexus of nerve fibers in grayscale images from the public Ultrasound Nerve Segmentation database. Expert evaluation identified U-Net, YOLOv4, and PSPNet as suitable model types, and computer experiments showed that U-Net is the most effective in terms of Intersection over Union and Dice Loss, providing a detection accuracy of about 0.89. Analysis of the experimental results also showed the need to improve the mathematical apparatus used to calculate the mask detection indicators.
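
For reference, both evaluation metrics named in the abstract have simple definitions over binary masks. A minimal NumPy sketch (with hypothetical `pred` and `truth` mask arrays, not the authors' code) follows:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:                     # both masks empty: treat as perfect match
        return 1.0
    return np.logical_and(pred, truth).sum() / union

def dice_loss(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice loss = 1 - Dice coefficient, 2|A and B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    dice = 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom
    return 1.0 - dice
```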

Authors and Affiliations

I. Tereikovskyi ¹
Oleksandr Korchenko
S. Bushuyev ²
O. Tereikovskyi ³
Ruslana Ziubina
Olga Veselska

  1. Department of System Programming and Specialised Computer Systems of the National Technical University of Ukraine, Igor Sikorsky Kyiv Polytechnic Institute, Ukraine
  2. Department of Project Management, Kyiv National University of Construction and Architecture, Ukraine
  3. Department of Information Technology Security of National Aviation University, Kyiv, Ukraine

Abstract

For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Extracting clinically meaningful information from imaging modalities such as computerized tomography (CT), positron emission tomography (PET), and magnetic resonance (MR) scans is at the core of the software and advanced screening used by radiologists. In this paper, a universal framework for two parts of the dose control process – tumour detection and tumour area segmentation from medical images – is introduced. The framework implements methods to detect glioma tumours from CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and used to fuse the results of CT and PET examinations. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the model outputs bounding box coordinates for each tumour in the image. U-Net was used for semantic segmentation of malignant cells and the tumour area. Transfer learning was used to increase model accuracy given the limited dataset, and data augmentation methods were applied to generate additional training samples. The implemented framework can be applied to other use cases that combine object detection and area segmentation from grayscale and RGB images, especially in shaping computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems that assist doctors and medical care providers in the healthcare industry.
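
The detection-then-segmentation hand-off described above can be sketched roughly as follows; `detector` and `unet` are hypothetical placeholders standing in for the trained Mask R-CNN and U-Net models, and the patch size is an arbitrary assumption, not a value from the paper:

```python
import numpy as np
from skimage.transform import resize  # any resampling routine would do

def detect_then_segment(image, detector, unet, patch_size=(64, 64)):
    """Detect tumour bounding boxes, then segment each detected region.

    `detector` is assumed to return (x0, y0, x1, y1) boxes and `unet`
    to map an image patch to a probability mask; both are placeholders.
    """
    full_mask = np.zeros(image.shape[:2], dtype=float)
    for x0, y0, x1, y1 in detector(image):
        patch = image[y0:y1, x0:x1]
        prob = unet(resize(patch, patch_size))   # segment the cropped region
        prob = resize(prob, patch.shape[:2])     # map mask back to crop size
        full_mask[y0:y1, x0:x1] = np.maximum(full_mask[y0:y1, x0:x1], prob)
    return full_mask > 0.5                       # binary tumour-area mask
```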


Authors and Affiliations

Estera Kot ¹
Zuzanna Krawczyk ¹
Krzysztof Siwek ¹
Leszek Królicki ²
Piotr Czwarnowski ²

  1. Warsaw University of Technology, Faculty of Electrical Engineering, Pl. Politechniki 1, 00-661 Warsaw, Poland
  2. Medical University of Warsaw, Nuclear Medicine Department, ul. Banacha 1A, 02-097 Warsaw, Poland

Abstract

Nowadays, medical imaging modalities such as magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and computed tomography (CT) play a crucial role in clinical diagnosis and treatment planning. The images obtained from these modalities contain complementary information about the imaged organ. Image fusion algorithms bring this disparate information together into a single image, allowing doctors to diagnose disorders quickly. This paper proposes a novel technique for the fusion of MRI and PET images based on the YUV color space and the wavelet transform. Quality assessment based on entropy showed that the method achieves promising results for medical image fusion. The paper presents a comparative analysis of MRI-PET fusion using different wavelet families at various decomposition levels for the detection of brain tumors as well as Alzheimer's disease. The quality assessment and visual analysis showed that the Dmey wavelet at decomposition level 3 is optimal for the fusion of MRI and PET images. The paper also compares several fusion rules, namely average, maximum, and minimum, and finds that the maximum fusion rule outperforms the other two.
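
A minimal sketch of the maximum fusion rule over a discrete wavelet decomposition is given below, assuming two co-registered luminance images of equal size (the YUV conversion and the handling of the PET chrominance channels are omitted). The abs-max coefficient choice is one common reading of the "maximum" rule, not necessarily the authors' exact variant; `pywt` is PyWavelets:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_max(c1, c2):
    """Keep, element-wise, the coefficient with the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_dwt_max(mri_y, pet_y, wavelet="dmey", level=3):
    """Fuse two registered luminance images with a DWT maximum rule."""
    a = pywt.wavedec2(mri_y, wavelet, level=level)
    b = pywt.wavedec2(pet_y, wavelet, level=level)
    fused = [fuse_max(a[0], b[0])]  # approximation band
    for (h1, v1, d1), (h2, v2, d2) in zip(a[1:], b[1:]):
        fused.append((fuse_max(h1, h2), fuse_max(v1, v2), fuse_max(d1, d2)))
    return pywt.waverec2(fused, wavelet)

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, the quality measure used."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())
```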

Authors and Affiliations

Jinu Sebastian ¹
G.R. Gnana King ¹

  1. Sahrdaya College of Engineering and Technology, Thrissur, Kerala, India (under APJ Abdul Kalam Technological University)

Abstract

Recently, the analysis of medical images has been gaining substantial research interest due to advancements in computer vision. Automating medical image analysis can significantly improve the diagnostic process and lead to better prioritization of patients waiting for medical consultation. This research builds a multi-feature ensemble model that combines two independent methods of image description: textural features and deep learning. Different classification algorithms were applied to single-phase computed tomography images containing 8 subtypes of renal neoplastic lesions. The final ensemble combines a textural description, classified by a support vector machine, with various configurations of convolutional neural networks. Experimental tests showed that such a model achieves a weighted F1-score of 93.6% (in 10-fold cross-validation), an improvement of 3.5 percentage points over the best individual predictor.
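
The texture-plus-CNN soft-voting idea can be sketched roughly as below; the GLCM descriptor set, the voting weight `w`, and the stand-in `cnn_probs` array are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img_u8):
    """A few Haralick-style descriptors from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def ensemble_predict(svm: SVC, x_texture, cnn_probs, w=0.5):
    """Soft-voting fusion of SVM and CNN class probabilities.

    `cnn_probs` stands in for the per-class probabilities of any CNN
    configuration; `w` is a free parameter. The SVM must be trained with
    SVC(probability=True). Weighted F1 can then be computed with
    sklearn.metrics.f1_score(y_true, y_pred, average="weighted").
    """
    svm_probs = svm.predict_proba(x_texture)
    return np.argmax(w * svm_probs + (1 - w) * cnn_probs, axis=1)
```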


Authors and Affiliations

Aleksandra Maria Osowska-Kurczab ¹
Tomasz Markiewicz ¹ ²
Miroslaw Dziekiewicz ²
Malgorzata Lorent ²

  1. Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland
  2. Military Institute of Medicine, ul. Szaserów 128, 04-141 Warsaw, Poland

Abstract

This work presents an automatic system for generating kidney boundaries in computed tomography (CT) images and describes the main medical image processing components of the developed system. The U-Net network, now widely used as a standard solution for many medical image processing tasks, was used for image segmentation. An innovative framing of the input data was implemented to improve the quality of the training data and to reduce its size. Precision-recall analysis was performed to calculate the optimal image threshold value. To eliminate false-positive errors, a common issue in segmentation based on neural networks, volumetric analysis of coherent areas was applied. The developed system generates kidney boundaries fully automatically and can also generate a three-dimensional kidney model. It can be helpful to those who analyze medical images, particularly medical specialists who prepare descriptions of CT examinations: it works fully automatically, can help increase the accuracy of medical diagnoses, and reduces the time needed to prepare medical descriptions.
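
Two of the described steps, threshold selection by precision-recall analysis and volumetric suppression of false positives, can be sketched as follows; the F1-maximising criterion and the `min_voxels` cut-off are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np
from scipy import ndimage
from sklearn.metrics import precision_recall_curve

def optimal_threshold(y_true, y_prob):
    """Probability threshold maximising F1 along the precision-recall curve."""
    precision, recall, thresholds = precision_recall_curve(
        y_true.ravel(), y_prob.ravel())
    f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
    return thresholds[np.argmax(f1[:-1])]  # last P/R point has no threshold

def drop_small_components(mask_3d, min_voxels=5000):
    """Remove connected regions below a volume cut-off (false positives)."""
    labels, n = ndimage.label(mask_3d)
    sizes = ndimage.sum(mask_3d, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labels, keep)
```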


Authors and Affiliations

Tomasz Les ¹

  1. Faculty of Electrical Engineering, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland
