Learning With Selected Features

Research output: Contribution to journal › Article › peer-review


Abstract

The coming big data era brings data of unprecedented size and drives innovation in learning algorithms across the statistical and machine-learning communities. The classical kernel-based regularized least-squares (RLS) algorithm is left out of this innovation because of its computational and storage bottlenecks. This article presents a scalable algorithm based on subsampling, called learning with selected features (LSF), to reduce the computational burden of RLS. An almost optimal learning rate is derived, together with a sufficient condition on the selection of kernels and centers that guarantees this optimality. Our theoretical assertions are verified by numerical experiments, including toy simulations, UCI standard data experiments, and a real-world massive data application. The studies in this article show that LSF can reduce the computational burden of RLS without significantly sacrificing its generalization ability.
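
The scalability idea described in the abstract is to restrict the kernel RLS expansion to a small set of selected centers, so that only a reduced linear system has to be solved instead of the full n × n kernel system. As a rough illustration of this general idea (not the paper's exact LSF procedure, whose kernel and center selection rule are not reproduced here), the sketch below subsamples centers uniformly at random and solves the restricted regularized least-squares problem; the function names, the Gaussian kernel, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def subsampled_krls(X, y, m=50, lam=1e-3, sigma=1.0, seed=None):
    """Kernel RLS restricted to m selected centers (uniform subsampling here).

    Full kernel RLS solves an n x n linear system; restricting the expansion
    to m centers reduces this to an m x m system plus an n x m kernel block.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=min(m, n), replace=False)]
    K_nm = gaussian_kernel(X, centers, sigma)        # n x m
    K_mm = gaussian_kernel(centers, centers, sigma)  # m x m
    # Coefficients of f(x) = sum_j c_j K(x, center_j), obtained from the
    # regularized normal equations of the restricted least-squares problem.
    A = K_nm.T @ K_nm + n * lam * K_mm + 1e-10 * np.eye(len(centers))
    c = np.linalg.solve(A, K_nm.T @ y)
    return centers, c

def predict(X_new, centers, c, sigma=1.0):
    return gaussian_kernel(X_new, centers, sigma) @ c

# Toy usage: regress a noisy sine curve with 100 selected centers.
X = np.random.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(2000)
centers, c = subsampled_krls(X, y, m=100, lam=1e-4, sigma=0.5, seed=0)
y_hat = predict(X, centers, c, sigma=0.5)
print("training MSE:", np.mean((y_hat - y) ** 2))
```

In this sketch the m × m system is far cheaper to form, store, and solve than the full n × n kernel system required by standard kernel RLS, which is the kind of computational saving the abstract refers to; the paper's contribution concerns how to select kernels and centers so that the learning rate remains almost optimal.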

Original language: English
Pages (from-to): 2032-2046
Number of pages: 15
Journal: IEEE Transactions on Cybernetics
Volume: 52
Issue number: 4
DOIs
State: Published - 1 Apr 2022

Keywords

  • Learning theory
  • Regularized least squares (RLS)
  • Selected features
  • Subsampling
  • Uniqueness set
