Page 26 - IJEEE-2022-Vol18-ISSUE-1

22 |                                                                                                                     Atiyah & Ali

paper is divided into various sections. Section I describes the introduction and background of brain tumors. Section II describes the literature review, Section III describes the methods and procedures, Section IV shows the experimental results, and Section V describes the conclusion and future work.

II. LITERATURE REVIEW

In 2017, Hao Dong et al. [4] developed a fully automated tumor detection and segmentation system using the U-Net architecture. Based on studies utilizing the BraTS2015 dataset, encompassing patients with both low-grade gliomas (LGG) and high-grade gliomas (HGG), they demonstrated that their technique yields competent and robust segmentation. Moreover, the U-Net model produced comparable results for the total tumor tissue and superior results for the core tumor tissue. The model achieved Dice coefficient scores of 0.86, 0.86, and 0.65 for the complete, core, and enhancing tumors, respectively.

In 2017, Chinmayi et al. [5] developed an unsupervised approach for autonomous brain image segmentation based on the Bhattacharya coefficient. After preprocessing, an anisotropic diffusion filter with an 8-connected neighborhood is applied to the MRI images to eliminate noise. The second stage selects sample points for CNN training using the Fast Bounding Box (FBB) technique. The accuracy and similarity index were evaluated; the model's accuracy of 98.01% is higher than that of comparable models.

In 2019, Pereira et al. [6] proposed a new convolutional neural network technique for MRI segmentation of brain tumors. Bias field correction, intensity normalization, and patch normalization were all part of the preprocessing stage. Later, in the training phase, the number of under-represented LGG classes was artificially increased by rotating the training patches and employing HGG samples, resulting in a larger number of training patches.

In 2020, Hassan Ali Khan et al. [7] used a CNN approach combined with data augmentation and image processing to categorize cancerous and non-cancerous MRI brain images. Their pipeline removes the black borders and keeps only the brain region using open-source computer vision (OpenCV) Canny edge detection. Images were also flipped, rotated, and brightened to increase their number and variety. The model was tested on a small dataset and obtained 100% accuracy.

In 2020, Xue Feng et al. [8] developed 3D U-Nets for brain tumor segmentation. The model attained median Dice scores of 0.870 for the enhancing tumor (ET), 0.926 for the whole tumor (WT), and 0.911 for the tumor core (TC). To handle multi-site and multi-scan MRI acquisitions, the researchers used intensity normalization to reduce variability. They examined using additional data to cope with the diversity in geographical location and anatomical makeup of brain tumors, and investigated rotating patches and sampling under-represented HGG classes in LGG. Brain tumor segmentation is still understudied in deep learning algorithms. They also compared the deep CNN to a shallower architecture with larger filters to assess the feasibility of building a deep architecture with small kernels. Finally, they verified the importance of the leaky rectified linear unit (LReLU) activation function in CNN architecture training.

In 2021, Fabian et al. [9] applied the nnU-Net neural network to the segmentation task of the BraTS Challenge 2020. The basic nnU-Net configuration already achieved strong results. The researchers further improved the segmentation results of the nnU-Net pipeline by introducing BraTS-specific modifications such as post-processing, region-based training, more aggressive data augmentation, and a few minor changes. The model achieved HD95 values of 17.337, 8.498, and 17.805 and Dice scores of 85.06, 88.95, and 82.03 for the core, whole, and enhancing tumor, respectively.

In 2021, Gunasekara et al. [10] proposed an automated technique for identifying, segmenting, and retrieving precise tumor borders from MRI scans. To categorize axial MRI slices into meningioma and glioma brain cancers, the researchers built tumor bounding boxes with 93.6% confidence using a rudimentary CNN architecture with a restricted number of layers. They then employed the Chan-Vese unsupervised adaptive threshold detection technique to obtain accurate tumor boundaries, and the metrics were computed by comparing the segmented border area to assess overall system performance. The suggested architecture achieved a Dice score of 0.92.

III. METHODS AND PROCEDURES

In this paper, we propose edge-based segmentation and region-based segmentation using U-Net with a ResNet50 encoder as the backbone to increase the accuracy of MRI image segmentation for human brain tumor disease. The workflow diagram of the proposed method is depicted in Fig. 2. The proposed method comprises data pre-processing, edge and region detection, and segmentation. Initially, the given MRI image is pre-processed, followed by brain edge or region detection, and then the segmentation that clearly shows the tumor area.
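Most of the results surveyed above are reported as Dice coefficient scores. For reference, the Dice score between a predicted and a ground-truth binary mask can be computed as below; this is a generic NumPy sketch of the standard metric, not code from any of the cited papers:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0 and fully disjoint masks score near 0, which makes the per-region scores (whole tumor, tumor core, enhancing tumor) directly comparable across the works above.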

Fig. 2: The Workflow Diagram of The Proposed Scheme.
   21   22   23   24   25   26   27   28   29   30   31