Learning theory of distributed spectral algorithms

Research output: Contribution to journal › Article › peer-review

127 Scopus citations

Abstract

Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, based on a divide-and-conquer approach, for handling big data. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework, including tight error bounds and minimax-optimal learning rates, obtained by means of a novel integral operator approach and a second-order decomposition of inverse operators. Our quantitative estimates are given in terms of the regularity of the regression function, the effective dimension of the reproducing kernel Hilbert space, and the qualification of the filter function of the spectral algorithm. They require no eigenfunction or noise conditions and improve on existing results, even for the classical family of spectral algorithms.
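To make the divide-and-conquer idea concrete, the following is a minimal sketch (not the paper's analysis) of a distributed spectral algorithm: kernel ridge regression, whose filter function is g_λ(t) = 1/(t + λ), is fit independently on disjoint data blocks and the local estimators are averaged. All function names, the Gaussian kernel, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def krr_fit(X, y, sigma=0.5, lam=1e-3):
    """Kernel ridge regression on one data block.

    KRR is a spectral algorithm with filter g_lambda(t) = 1/(t + lambda);
    kernel and hyperparameters here are illustrative choices.
    """
    n = len(X)
    # Gaussian kernel matrix on the block (1-D inputs for simplicity)
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * sigma ** 2))
    # Regularized linear system (K + lam * n * I) alpha = y
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha, sigma

def krr_predict(model, Xnew):
    """Evaluate a fitted block estimator at new points."""
    X, alpha, sigma = model
    Kx = np.exp(-((Xnew[:, None] - X[None, :]) ** 2) / (2 * sigma ** 2))
    return Kx @ alpha

def distributed_krr(X, y, m):
    """Divide-and-conquer: fit KRR on m disjoint blocks, average predictors."""
    blocks = np.array_split(np.arange(len(X)), m)
    models = [krr_fit(X[b], y[b]) for b in blocks]
    return lambda Xnew: np.mean(
        [krr_predict(mod, Xnew) for mod in models], axis=0
    )

# Synthetic regression example: noisy observations of sin(pi * x)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 400)
y = np.sin(np.pi * X) + 0.1 * rng.normal(size=400)

f_hat = distributed_krr(X, y, m=4)  # 4 local machines
Xtest = np.linspace(-1.0, 1.0, 50)
mse = np.mean((f_hat(Xtest) - np.sin(np.pi * Xtest)) ** 2)
```

Each block only inverts an (n/m) × (n/m) kernel matrix, so the O(n³) cost of a single global solve drops to m solves of cost O((n/m)³); the paper's contribution is showing when the averaged estimator still attains the minimax-optimal rate of the single-machine algorithm.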

Original language: English
Article number: 074009
Journal: Inverse Problems
Volume: 33
Issue number: 7
DOIs: Yes
State: Published - 21 Jun 2017
Externally published: Yes

Keywords

  • distributed learning
  • integral operator
  • learning rate
  • spectral algorithm
