Publication Database
The Publication Database is the collection of publications by researchers of the University of Debrecen. The collection is closed by default; its individual documents may only be made available in compliance with copyright. Access may be based on the publishers' copyright agreements or on the provisions of the Hungarian Copyright Act in force that apply to public collections (Szjt. 30.§, Szjt. 38.§ (5), Government Decree 117/2004).
Browse
Publication Database: browsing by subject "Adaptive boosting"
Showing 1 - 2 of 2
Item (Open Access)
Deep Learning-Based Method for Detecting Cassini-Huygens Spacecraft Trajectory Modifications
Ashraf ALDabbas; Zoltán Gál; Aldabbas Ashraf Khaled Abd Elkareem (1983-) (computer scientist); Gál Zoltán (1966-) (computer and electrical engineer); Informatikai Rendszerek és Hálózatok Tanszék -- 904; IK; Debreceni Egyetem

Item (Open Access)
Hybrid AdaBoost and Naïve Bayes Classifier for Supervised Learning
Ahiya Ahammed; Balazs Harangi; Andras Hajdu; Ahiya Ahammed (2022-) (xxx); Harangi Balázs (1986-) (software developer mathematician); Hajdu András (1973-) (mathematician, computer scientist); Adattudomány és Vizualizáció Tanszék -- 905; IK; Debreceni Egyetem
Supervised learning is a machine learning task that maps an input to an output based on available data. The data set contains the information from which the process of correctly classifying at least some of the data can be learned; the labelled part of the data is called the training set. In this learning method, the supervision comes from the labelled instances in the training set. Classification problems are usually considered to belong to the branch of supervised learning. Classification is a machine learning function that predicts the class labels of all unlabelled instances. In this research, our main focus is on the most commonly used data classification methods, in particular the Naïve Bayes (NB) and Adaptive Boosting (AdaBoost) classifiers. To improve the accuracy of these classifiers, we introduce two new hybrid approaches for classification when the data set is big, noisy, and high-dimensional. In real life, data sets usually contain noise or outliers, contradictory instances, or missing values, most of which are introduced during data collection or generation. To address this issue, we propose two new hybrid classifiers. Our first hybrid approach is the ADA+NB classifier, in which the Adaptive Boosting (AdaBoost) classifier is used to find the comparatively more important attribute subsets before applying the class conditional independence assumption of the Naïve Bayes (NB) classifier [13]. Because the NB classifier assumes class conditional independence, the probabilities can be multiplied when the events are independent; as a consequence, the NB classifier can be very effective in removing examples from the training set before decision tree (DT) generation when building the AdaBoost model. We call this process our second proposed hybrid NB+ADA classifier [14]. This paper compares the two classical machine learning approaches with the two new hybrid classifiers in terms of accuracy rate, error rate, precision, F-score, sensitivity, and specificity on four real high-dimensional and noisy benchmark data sets chosen from the UCI (University of California, Irvine) machine learning repository. For instance, the Adult data set is one of the well-known noisy data sets available in UCI; on this data set, the AdaBoost classifier achieved an accuracy of 87.65% and the NB classifier 79.99%, while our first proposed (ADA+NB) classifier reached 86.39% and our second proposed (NB+ADA) classifier 94.14%. Similarly, we used other data sets and derived performance comparisons between the classifiers to show that our proposed classifiers perform better than the textbook classifiers.
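To make the ADA+NB idea in the abstract above more concrete, a minimal sketch follows: AdaBoost is fitted first and its feature importances are used to keep only the comparatively more important attributes, after which Naïve Bayes is trained on that reduced subset. This is a hypothetical illustration in scikit-learn, not the authors' code; the breast cancer data set, the median-importance cut-off, and the parameter values are placeholder assumptions standing in for the UCI data sets and settings used in the paper.

```python
# Hypothetical ADA+NB sketch: AdaBoost feature importances pick an attribute
# subset, then Naive Bayes is trained on that subset. Data set, threshold and
# parameters are illustrative assumptions, not the published configuration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Step 1: fit AdaBoost and rank attributes by its aggregated feature importances.
ada = AdaBoostClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
importances = ada.feature_importances_

# Step 2: keep only the comparatively more important attributes (top half here).
keep = importances >= np.median(importances)

# Step 3: train Naive Bayes on the reduced attribute subset and evaluate.
nb = GaussianNB().fit(X_train[:, keep], y_train)
print("ADA+NB accuracy:", accuracy_score(y_test, nb.predict(X_test[:, keep])))
```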
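The second hybrid, NB+ADA, uses Naïve Bayes to remove examples from the training set before the decision-tree based AdaBoost model is built. The sketch below is one hedged interpretation of that step: training examples whose labels disagree with the NB prediction are treated as likely noise and dropped. The filtering rule, data set, and parameters are illustrative assumptions rather than the authors' published procedure.

```python
# Hypothetical NB+ADA sketch: Naive Bayes flags likely noisy training examples,
# and AdaBoost (decision-tree weak learners) is built on the filtered set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Step 1: fit NB and keep only training examples whose label matches NB's
# prediction (an assumed proxy for "not noise").
nb = GaussianNB().fit(X_train, y_train)
clean = nb.predict(X_train) == y_train

# Step 2: build the AdaBoost model on the filtered training set and evaluate.
ada = AdaBoostClassifier(n_estimators=100, random_state=42)
ada.fit(X_train[clean], y_train[clean])
print("NB+ADA accuracy:", accuracy_score(y_test, ada.predict(X_test)))
```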