Distributed robust regression with correntropy losses and regularization kernel networks

Abstract
Distributed learning has attracted considerable attention in recent years owing to its ability to handle the large-scale data arising in many science and engineering problems. Based on a divide-and-conquer strategy, this paper studies a distributed robust regression algorithm with correntropy losses and coefficient regularization in the framework of kernel networks, where the kernel functions are not required to be symmetric or positive semi-definite. We establish explicit convergence rates for the distributed algorithm in terms of the number of data partitions and the robustness and regularization parameters. We show that, with suitable parameter choices, the distributed robust algorithm attains the minimax-optimal convergence rate while reducing the computational complexity and memory requirements of the standard (non-distributed) algorithm.
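As a rough illustration of the scheme the abstract describes, the sketch below implements a divide-and-conquer kernel-network estimator: each data partition fits coefficients under the correntropy loss sigma^2 * (1 - exp(-r^2 / sigma^2)) with l2 regularization on the kernel-expansion coefficients, and the local predictors are averaged. The half-quadratic iteratively reweighted least-squares solver, the Gaussian kernel, and all names and parameter values here are illustrative assumptions, not details taken from the paper, which analyzes the estimator rather than a particular solver.

```python
import numpy as np

def gauss_kernel(X, Z, width=0.5):
    # Gaussian kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 * width^2));
    # the paper's analysis allows more general, possibly non-symmetric kernels.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

def local_fit(X, y, lam, sigma, n_iter=30):
    """Correntropy loss with l2 coefficient regularization on one partition,
    solved by half-quadratic iteratively reweighted least squares
    (an illustrative solver choice)."""
    n = len(y)
    K = gauss_kernel(X, X)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ alpha                # current residuals
        w = np.exp(-r**2 / sigma**2)     # correntropy weights: outliers -> ~0
        KW = K.T * w                     # K^T diag(w)
        # weighted ridge system: (K^T W K + n*lam*I) alpha = K^T W y
        alpha = np.linalg.solve(KW @ K + n * lam * np.eye(n), KW @ y)
    return X, alpha

def distributed_fit(X, y, m, lam, sigma):
    # divide-and-conquer: split the sample into m disjoint parts
    # and fit a local estimator on each part
    blocks = np.array_split(np.arange(len(y)), m)
    return [local_fit(X[idx], y[idx], lam, sigma) for idx in blocks]

def predict(models, X_new):
    # averaged predictor: (1/m) * sum of the m local kernel expansions
    return np.mean([gauss_kernel(X_new, Xc) @ a for Xc, a in models], axis=0)
```

Because the correntropy weight exp(-r^2 / sigma^2) decays rapidly in the residual, gross outliers receive near-zero weight in each local solve, which is the robustness mechanism the loss provides; the averaging step is what reduces the per-machine cost from the full sample size to the partition size.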
| Original language | English |
|---|---|
| Pages (from-to) | 689-725 |
| Number of pages | 37 |
| Journal | Analysis and Applications |
| Volume | 22 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1 May 2024 |
Keywords
- Distributed learning
- correntropy
- divide-and-conquer strategy
- kernel spaces
- robust estimation