Distributed Robust Algorithms with Dependent Sampling

  • Baobin Wang
  • Ting Hu
  • Liangzhen Lei

Research output: Contribution to journal › Article › peer-review

Abstract

Robust algorithms have been widely used and intensively studied in the engineering, statistics, and machine learning communities, since such algorithms are less sensitive to outliers and effective in addressing non-Gaussian noise during the learning process. In this paper, we study the learning performance of a distributed robust algorithm with mixing dependent samples, where big data are collected distributively and exhibit a dependence structure. Learning rates are derived by means of an integral operator decomposition technique and probability inequalities in Hilbert spaces. The results show that, with a suitable robustification parameter, the performance of the distributed robust algorithm is comparable with that of its non-distributed counterpart, even though the dependence structure restricts the availability and the effective amount of data.
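The divide-and-conquer scheme the abstract describes — fitting a robust estimator on each local block of data and averaging the local solutions — can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a linear model with a Huber-type loss fitted by gradient descent, whereas the paper analyzes a kernel-based estimator with dependent (mixing) samples; the function names, step sizes, and the `tau` robustification parameter here are illustrative choices.

```python
import numpy as np

def huber_grad(r, tau):
    # Derivative of the Huber loss: linear (quadratic loss) for small
    # residuals, clipped at +/- tau for large ones, which limits the
    # influence of outliers and heavy-tailed noise.
    return np.where(np.abs(r) <= tau, r, tau * np.sign(r))

def fit_local(X, y, tau, lam=0.01, lr=0.1, steps=500):
    # Robust regularized linear estimator on one data block,
    # minimizing the Huber risk plus a ridge penalty by gradient descent.
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        r = X @ w - y
        w -= lr * (X.T @ huber_grad(r, tau) / n + lam * w)
    return w

def distributed_robust_fit(X, y, m, tau):
    # Divide-and-conquer: split the sample into m blocks, fit a robust
    # estimator on each block, and average the local solutions.
    blocks = np.array_split(np.arange(len(y)), m)
    return np.mean([fit_local(X[b], y[b], tau) for b in blocks], axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
w_true = np.array([1.0, -2.0, 0.5])
# Student-t noise with 2 degrees of freedom: heavy-tailed, non-Gaussian.
y = X @ w_true + rng.standard_t(df=2, size=600)
w_hat = distributed_robust_fit(X, y, m=6, tau=1.0)
print(np.round(w_hat, 2))
```

The averaged estimator remains close to the true coefficients despite the heavy-tailed noise, which is the qualitative behavior the paper quantifies: with a well-chosen robustification parameter, the distributed estimator matches the rates of its non-distributed counterpart.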

Original language: English
Article number: 3813
Journal: Mathematics
Volume: 13
Issue number: 23
State: Published - Dec 2025

Keywords

  • dependent samples
  • distributed learning
  • integral operator
  • learning rates
  • robustness
