Distributed robust regression with correntropy losses and regularization kernel networks

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Distributed learning has attracted considerable attention in recent years due to its power to handle big data in various science and engineering problems. Based on a divide-and-conquer strategy, this paper studies a distributed robust regression algorithm associated with correntropy losses and coefficient regularization in the scheme of kernel networks, where the kernel functions are not required to be symmetric or positive semi-definite. We establish explicit convergence results for this distributed algorithm in terms of the number of data partitions and the robustness and regularization parameters. We show that with suitable parameter choices the distributed robust algorithm attains the optimal convergence rate in the minimax sense, while simultaneously reducing the computational complexity and memory requirement relative to the standard (non-distributed) algorithm.
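The strategy described in the abstract can be illustrated with a minimal sketch: split the sample into blocks, fit a kernel-network estimator on each block by minimizing a correntropy (Welsch) loss with squared-coefficient regularization, and average the local estimators. This is only an illustrative reconstruction, not the paper's exact algorithm; the Gaussian kernel, the half-quadratic reweighting scheme, and all parameter values below are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, width=0.5):
    # Assumed kernel for illustration: K[i, j] = exp(-||x_i - z_j||^2 / (2 width^2)).
    # The paper allows general (non-symmetric, non-PSD) kernels.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def local_correntropy_fit(X, y, sigma=0.5, lam=1e-4, n_iter=20, kernel=gaussian_kernel):
    """Fit f(x) = sum_j alpha_j K(x, x_j) on one data block by minimizing a
    correntropy (Welsch) loss with l2 coefficient regularization, using
    half-quadratic iteratively reweighted least squares (an assumed solver)."""
    n = len(y)
    K = kernel(X, X)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ alpha                       # current residuals
        w = np.exp(-r**2 / (2 * sigma**2))      # correntropy-induced weights: outliers -> ~0
        # Weighted ridge normal equations; uses K^T K, so K need not be symmetric or PSD.
        A = K.T @ (w[:, None] * K) + n * lam * np.eye(n)
        alpha = np.linalg.solve(A, K.T @ (w * y))
    return alpha

def distributed_fit(X, y, m, **kw):
    """Divide-and-conquer: partition the data into m blocks, fit each block
    locally, and average the m local estimators at prediction time."""
    blocks = np.array_split(np.arange(len(y)), m)
    models = [(X[b], local_correntropy_fit(X[b], y[b], **kw)) for b in blocks]
    def predict(Xq, kernel=gaussian_kernel):
        return np.mean([kernel(Xq, Xb) @ a for Xb, a in models], axis=0)
    return predict
```

Because each block solves an n/m-sized linear system instead of one n-sized system, the per-machine cost and memory drop roughly by factors of (m^2) and (m^2) for dense kernel matrices, which is the computational saving the abstract refers to.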

Original language: English
Pages (from-to): 689-725
Number of pages: 37
Journal: Analysis and Applications
Volume: 22
Issue number: 4
DOIs
State: Published - 1 May 2024

Keywords

  • Distributed learning
  • correntropy
  • divide-and-conquer strategy
  • kernel spaces
  • robustness estimation
