Learning with ℓ1-Regularizer Based on Markov Resampling

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Learning with an ℓ1-regularizer has attracted a great deal of research in the learning theory community. Previously known results for learning with an ℓ1-regularizer are based on the assumption that samples are independent and identically distributed (i.i.d.), and the best known learning rate for ℓ1-regularization-type algorithms is O(1/√m), where m is the sample size. This paper goes beyond the classic i.i.d. framework and investigates the generalization performance of least square regression with an ℓ1-regularizer (ℓ1-LSR) based on uniformly ergodic Markov chain (u.e.M.c) samples. On the theoretical side, we prove that the learning rate of ℓ1-LSR for u.e.M.c samples, ℓ1-LSR(M), is of order O(1/m), which is faster than the O(1/√m) rate of the i.i.d. counterpart. On the practical side, we propose an algorithm based on a resampling scheme to generate u.e.M.c samples. We show that the proposed ℓ1-LSR(M) improves on ℓ1-LSR(i.i.d.) in generalization error at the low cost of u.e.M.c resampling.
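The abstract does not specify the resampling rule itself. As an illustration only (not the authors' algorithm), the general idea of generating a Markov chain of training samples from a fixed dataset can be sketched with a Metropolis-style acceptance step, where a uniformly proposed candidate is accepted with probability given by a loss ratio; the names `markov_resample` and `loss` below are hypothetical:

```python
import random

def markov_resample(samples, loss, n_draws, seed=0):
    """Draw a Markov chain of samples from a finite dataset.

    Illustrative Metropolis-style scheme (not the paper's exact rule):
    propose a candidate uniformly at random and accept it with
    probability min(1, loss(candidate)/loss(current)), so the chain
    tends to revisit high-loss (informative) samples.
    """
    rng = random.Random(seed)
    current = rng.choice(samples)
    chain = [current]
    while len(chain) < n_draws:
        candidate = rng.choice(samples)  # proposal: uniform over the dataset
        ratio = loss(candidate) / max(loss(current), 1e-12)
        if rng.random() < min(1.0, ratio):
            current = candidate  # accept the candidate
        chain.append(current)    # otherwise repeat the current sample
    return chain

# Toy usage: squared-loss of a fixed predictor h(x) = 2.0 (hypothetical)
data = [1.0, 2.0, 3.0, 4.0]
chain = markov_resample(data, lambda z: (z - 2.0) ** 2 + 0.1, n_draws=8)
```

Because the proposal is uniform over a finite dataset and the acceptance probability is bounded away from zero, such a chain mixes geometrically, which is the kind of uniform ergodicity the analysis requires.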

Original language: English
Article number: 7109869
Pages (from-to): 1189-1201
Number of pages: 13
Journal: IEEE Transactions on Cybernetics
Volume: 46
Issue number: 5
DOIs
State: Published - May 2016

Keywords

  • Markov resampling
  • Uniformly ergodic Markov chains (u.e.M.c)
  • learning theory
  • least square regression
  • ℓ1 regularization
