Generalization bounds of ERM algorithm with Markov chain samples

Research output: Contribution to journal › Article › peer-review


Abstract

One of the main goals of machine learning is to study the generalization performance of learning algorithms. Most previous results describing the generalization ability of learning algorithms are based on independent and identically distributed (i.i.d.) samples. However, independence is a very restrictive assumption for both theory and real-world applications. In this paper we go beyond this classical framework by establishing bounds on the rate of relative uniform convergence for the Empirical Risk Minimization (ERM) algorithm with uniformly ergodic Markov chain samples. We not only obtain generalization bounds for the ERM algorithm, but also show that the ERM algorithm with uniformly ergodic Markov chain samples is consistent. The established theory underlies the application of ERM-type learning algorithms.
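To illustrate the setting the abstract describes, the following sketch (not taken from the paper) runs ERM over a small finite hypothesis class where the training inputs come from a uniformly ergodic two-state Markov chain rather than an i.i.d. source. The transition matrix, noise rate, and hypothesis class are all hypothetical choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix of a two-state chain; every row has full support,
# so the chain is uniformly ergodic (illustrative choice, not from the paper).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def sample_chain(n, p=P, start=0):
    """Draw a length-n trajectory (x_1, ..., x_n) from the Markov chain."""
    states = np.empty(n, dtype=int)
    x = start
    for i in range(n):
        x = rng.choice(2, p=p[x])  # next state depends only on current state
        states[i] = x
    return states

# Labels: y equals x, flipped with probability 0.1 (hypothetical noise model).
x = sample_chain(2000)
y = np.where(rng.random(x.size) < 0.1, 1 - x, x)

# A small finite hypothesis class for the sketch.
hypotheses = {
    "const0": lambda s: np.zeros_like(s),
    "const1": lambda s: np.ones_like(s),
    "identity": lambda s: s,
    "negation": lambda s: 1 - s,
}

# ERM: choose the hypothesis minimizing the empirical 0-1 risk
# computed on the dependent (Markov chain) sample.
risks = {name: float(np.mean(h(x) != y)) for name, h in hypotheses.items()}
best = min(risks, key=risks.get)
print(best, risks[best])
```

Despite the dependence between successive samples, ergodicity lets the empirical risk concentrate around the stationary risk, so ERM still recovers the hypothesis with the lowest true risk (here, `identity`, whose empirical risk is close to the 0.1 noise rate); quantifying that concentration is exactly what the paper's relative uniform convergence bounds address.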

Original language: English
Pages (from-to): 223-238
Number of pages: 16
Journal: Acta Mathematicae Applicatae Sinica
Volume: 30
Issue number: 1
DOIs
State: Published - Mar 2014

Keywords

  • ERM algorithm
  • generalization bounds
  • learning theory
  • relative uniform convergence
  • uniformly ergodic Markov chain

