Estimation of convergence rate for multi-regression learning algorithm

Research output: Contribution to journal › Article › peer-review

Abstract

In many applications, prior information about the regression function is unavailable, so the function must be learned from data with suitable tools. In this paper we investigate the regression problem in learning theory, namely the convergence rate of a least-squares regression learning algorithm over a multi-dimensional polynomial hypothesis space. Our main aim is to analyze the generalization error for multi-regression problems in learning theory. Using the well-known Jackson operators from approximation theory, together with covering numbers, entropy numbers, and related probability inequalities, we derive upper and lower bounds on the convergence rate of the learning algorithm. In particular, we show that for smooth multivariate regression functions the estimates achieve the almost optimal rate of convergence, up to a logarithmic factor. These results are relevant to the study of the convergence, stability, and complexity of regression learning algorithms.
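The setting described in the abstract, a least-squares learning algorithm over a polynomial hypothesis space evaluated by its generalization error, can be illustrated with a minimal one-dimensional sketch. The target function, noise level, sample size, and polynomial degree below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def fit_poly_least_squares(x, y, degree):
    """Least-squares fit in the polynomial hypothesis space of the given degree."""
    return np.polyfit(x, y, degree)

rng = np.random.default_rng(0)
n = 400

# Illustrative smooth regression function observed with additive noise.
x_train = rng.uniform(-1.0, 1.0, n)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(n)

coeffs = fit_poly_least_squares(x_train, y_train, degree=5)

# Empirical generalization error: mean squared error on a fresh sample.
x_test = rng.uniform(-1.0, 1.0, n)
y_test = np.sin(np.pi * x_test) + 0.1 * rng.standard_normal(n)
mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
```

As the sample size grows, the test MSE approaches the noise variance plus the polynomial approximation error of the target; the paper's contribution is quantifying how fast this happens for smooth multivariate targets.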

Original language: English
Pages (from-to): 701-713
Number of pages: 13
Journal: Science China Information Sciences
Volume: 55
Issue number: 3
State: Published - Mar 2012

Keywords

  • covering number
  • entropy number
  • learning theory
  • rate of convergence
