

[13] Z. Huang and D. Chen, “A breast cancer diagnosis method based on VIM feature selection and hierarchical clustering random forest algorithm,” IEEE Access, vol. 10, pp. 3284–3293, 2022.

[14] J. Jumanto, M. F. Mardiansyah, R. N. Pratama, M. F. Al Hakim, and B. Rawat, “Optimization of breast cancer classification using feature selection on neural network,” Journal of Soft Computing Exploration, vol. 3, no. 2, pp. 105–110, 2022.

[15] W. H. Wolberg, “Breast Cancer Wisconsin (Diagnostic) Data Set,” UCI Machine Learning Repository, 1995. Available: https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(diagnostic)

[16] K. Teh, P. Armitage, S. Tesfaye, D. Selvarajah, and I. D. Wilkinson, “Imbalanced learning: Improving classification of diabetic neuropathy from magnetic resonance imaging,” PLoS ONE, vol. 15, no. 12, p. e0243907, 2020.

[17] K. Potdar, “A comparative study of categorical variable encoding techniques for neural network classifiers,” Int. J. Comput. Appl., vol. 175, no. 4, pp. 7–9, 2017.

[18] Q. Al-Tashi, S. J. Abdulkadir, H. M. Rais, S. Mirjalili,
      and H. Alhussian, “Approaches to multi-objective fea-
      ture selection: A systematic literature review,” IEEE
      Access, vol. 8, pp. 125076–125096, 2020.

[19] R. Saidi, W. Bouaguel, and N. Essoussi, “Hybrid fea-
      ture selection method based on the genetic algorithm
      and pearson correlation coefficient,” Machine learning
      paradigms: theory and application, pp. 3–24, 2019.

[20] B. Gierlichs and E. Prouff, “Mutual information analysis: a comprehensive study,” J. Cryptol., vol. 24, no. 2, pp. 269–291, 2011.

[21] A. Alonso-Betanzos, “Filter methods for feature selection – a comparative study,” in Int. Conf. Intell. Data Eng. Autom. Learn., Springer, Berlin, Heidelberg, vol. 4881, pp. 178–187, 2007.

[22] P. Ferreira, D. C. Le, and N. Zincir-Heywood, “Explor-
      ing feature normalization and temporal information for
      machine learning based insider threat detection,” in 2019
      15th International Conference on Network and Service
      Management (CNSM), pp. 1–7, IEEE, 2019.

[23] W. T. Ambrosius, Topics in biostatistics. Springer, 2007.

[24] M. Awad and R. Khanna, “Support vector machines for classification,” in Efficient Learning Machines, Apress, Berkeley, CA, pp. 39–66, 2015.

[25] L. Rokach and O. Maimon, “Decision trees,” in Data Mining and Knowledge Discovery Handbook, Springer, Boston, MA, pp. 165–192, 2005.

[26] M. A. Khan, M. A. Khan Khattak, S. Latif, A. A. Shah, M. Ur Rehman, W. Boulila, M. Driss, and J. Ahmad, “Voting classifier-based intrusion detection for IoT networks,” 2022.