Investigating the effect of randomly selected feature subsets on bagging and boosting

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Bagging, boosting, and random subspace methods are three of the most commonly used approaches for constructing ensemble classifiers. In this article, the effect of randomly selected feature subsets (intersectant or disjoint) on bagging and boosting is investigated. The performance of the related ensemble methods is compared through experiments on several UCI benchmark datasets. The results demonstrate that bagging can generally be improved by using randomly selected feature subsets, whereas boosting can be improved only in some cases. Furthermore, the diversity among the classifiers in an ensemble is discussed and related to the prediction accuracy of the ensemble classifier.
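The combination the abstract studies can be illustrated with a minimal sketch: bagging in which each ensemble member is trained on a bootstrap sample *and* its own randomly selected feature subset (the "intersectant" variant, since subsets drawn independently may overlap). This is an illustrative numpy-only toy using decision stumps as base learners, not the authors' experimental setup; all function names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Fit a one-level decision stump: pick the (feature, threshold, sign)
    with the highest training accuracy on binary labels in {0, 1}."""
    best = (0, 0.0, 1, -1.0)  # (feature, threshold, sign, accuracy)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = (sign * (X[:, j] - t) > 0).astype(int)
                acc = np.mean(pred == y)
                if acc > best[3]:
                    best = (j, t, sign, acc)
    return best[:3]

def stump_predict(stump, X):
    j, t, sign = stump
    return (sign * (X[:, j] - t) > 0).astype(int)

def bagged_subspace_ensemble(X, y, n_members=15, n_feats=3):
    """Bagging with randomly selected feature subsets: each member gets
    a bootstrap sample of the rows AND an independent random subset of
    the columns (subsets may intersect across members)."""
    members = []
    for _ in range(n_members):
        rows = rng.integers(0, len(X), size=len(X))               # bootstrap
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        stump = fit_stump(X[np.ix_(rows, feats)], y[rows])
        members.append((feats, stump))
    return members

def ensemble_predict(members, X):
    """Majority vote over the members, each seeing only its own features."""
    votes = np.stack([stump_predict(s, X[:, f]) for f, s in members])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Because each member sees a different feature subset, some members are weak on their own; the diversity this injects is exactly what the abstract relates to the ensemble's prediction accuracy, and the majority vote recovers a strong combined classifier when enough members receive informative features.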

Original language: English
Pages (from-to): 636-646
Number of pages: 11
Journal: Communications in Statistics: Simulation and Computation
Volume: 44
Issue number: 3
DOIs
State: Published - 1 Jan 2015

Keywords

  • Bagging
  • Boosting
  • Classification tree
  • Ensemble classifier
  • Random subspace
