Page 96 - IJEEE-2023-Vol19-ISSUE-1

92 |                                                                                Alkabool, Abdullah, Zadeh, & Mahfooz

 Long-Document Transformer,” arXiv:2004.05150, 2020. [Online]. Available: http://arxiv.org/abs/2004.05150.
[7] J. Burstein, D. Marcu, S. Andreyev, and M. Chodorow, “Towards automatic classification of discourse elements in essays,” in ACL '01: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pp. 98–105, July 2001, doi: 10.3115/1073012.1073026.
[8] A. H. Mohammed and A. H. Ali, “Survey of BERT (Bidirectional Encoder Representation Transformer) types,” J. Phys. Conf. Ser., vol. 1963, no. 1, 2021, doi: 10.1088/1742-6596/1963/1/012173.
[9] W. C. Mann and M. Taboada, “Rhetorical Structure Theory: looking back and moving ahead,” Discourse Stud., vol. 8, no. 3, pp. 423–459, 2006.
[10] J. Peller, “Feedback - baseline sentence classifier [0.226],” Kaggle, Dec. 2021. [Online]. Available: https://www.kaggle.com/code/julian3833/feedbackbaselisentence-classifier-0-226/notebook
[11] A. Habiby, “Roberta qna model,” Kaggle, Jan. 2022. [Online]. Available: https://www.kaggle.com/code/aliasgherman/roberta-qnamodel-maxlen-448-stride-192
[12] R. Solovyev, W. Wang, and T. Gabruseva, “Weighted boxes fusion: Ensembling boxes from different object detection models,” Image Vis. Comput., vol. 107, p. 104117, 2021, doi: 10.1016/j.imavis.2021.104117.
[13] A. Habiby, “Randomforest only (gradientboostnow),” Kaggle, Jan. 2022. [Online]. Available: https://www.kaggle.com/code/aliasgherman/randomforestonly-
[14] Lonnie, “Name entity recognition with keras,” Kaggle, Dec. 2021. [Online]. Available: https://www.kaggle.com/code/lonnieqin/namentityrecognition-with-keras
[15] raghavendrakotala, “Fine-tunned on roberta-base as ner problem [0.533],” Kaggle, Dec. 2021. [Online]. Available: https://www.kaggle.com/code/raghavendrakotala/finetunned-on-roberta-base-as-nerproblem-0-533
[16] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[17] Hugging Face, “Models,” Apr. 2022. [Online]. Available: https://huggingface.co/models
[18] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
[19] Hugging Face, “bert-base-uncased.” [Online]. Available: https://huggingface.co/bert-base-uncased
[20] D. Rothman, “Transformers for Natural Language Processing: Build Innovative Deep Neural Network Architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and More,” Packt Publishing, 2021.
[21] V. Sanh, L. Debut, J. Chaumond, and T. Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” 2019. [Online]. Available: http://arxiv.org/abs/1910.01108.
[22] K. Yuki, M. Fujiogi, and S. Koutsogiannaki, “COVID-19 pathophysiology: A review,” Clin. Immunol., vol. 215, p. 108427, 2020, doi: 10.1016/j.clim.2020.108427.
[23] EliteDataScience, “Overfitting in machine learning.” [Online]. Available: https://elitedatascience.com/overfitting-in-machine-learning
[24] X. Shi, Z. Guo, K. Li, Y. Liang, and X. Zhu, “Self-paced Resistance Learning against Overfitting on Noisy Labels,” Pattern Recognit., p. 109080, 2022, doi: 10.1016/j.patcog.2022.109080.