Kernel gradient descent algorithm for information theoretic learning

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Information theoretic learning is a learning paradigm that uses concepts of entropy and divergence from information theory. A variety of signal processing and machine learning methods fall into this framework, with the minimum error entropy principle being a typical example. In this paper, we study a kernel version of minimum error entropy methods that can be used to find nonlinear structures in data. We show that kernel minimum error entropy can be implemented by kernel-based gradient descent algorithms, with or without regularization, and we deduce convergence rates for both algorithms.
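The paper itself is not reproduced here, but the general technique the abstract names can be illustrated. The sketch below is a minimal, assumption-laden toy: minimum error entropy (MEE) learning is commonly realized as gradient ascent on the empirical information potential V = (1/n²) Σᵢⱼ G_σ(eᵢ − eⱼ), where eᵢ = yᵢ − f(xᵢ) and f lives in the RKHS of a Gaussian kernel, f(x) = Σₖ αₖ K(x, xₖ). All function names, parameter values, and the optional Tikhonov term are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian(u, sigma):
    """Unnormalized Gaussian window G_sigma(u)."""
    return np.exp(-u ** 2 / (2 * sigma ** 2))

def kernel_mee(X, y, n_iter=500, eta=0.02, sigma_e=0.5, sigma_k=0.5, lam=0.0):
    """Gradient ascent on the empirical information potential
    V(alpha) = (1/n^2) * sum_{i,j} G_{sigma_e}(e_i - e_j),
    with e = y - K @ alpha and f represented in a Gaussian RKHS.
    lam > 0 adds a Tikhonov-style regularization term (illustrative)."""
    n = len(X)
    K = gaussian(X[:, None] - X[None, :], sigma_k)  # Gram matrix (1-D inputs)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        e = y - K @ alpha                    # current errors
        D = e[:, None] - e[None, :]          # pairwise error differences
        W = D * gaussian(D, sigma_e)         # weights from the Parzen-window derivative
        # dV/dalpha simplifies to (2 / (n^2 sigma_e^2)) * K @ row_sums(W)
        grad = 2.0 / (n ** 2 * sigma_e ** 2) * (K @ W.sum(axis=1))
        alpha += eta * (grad - lam * (K @ alpha))  # ascent step (+ optional penalty)
    return alpha, K

# Toy usage: fit y = sin(x). MEE determines f only up to an additive constant,
# so success means the errors concentrate, not that they vanish.
X = np.linspace(0.0, 2.0, 20)
y = np.sin(X)
alpha, K = kernel_mee(X, y)
e = y - K @ alpha
```

Note the shift invariance: because V depends only on error differences, a trained MEE model is usually recentred afterwards by adding the mean error back as a bias.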

Original language: English
Article number: 105518
Journal: Journal of Approximation Theory
Volume: 263
State: Published - Mar 2021
Externally published: Yes

Keywords

  • Gradient descent algorithm
  • Information theoretic learning
  • Kernel method
  • Minimum error entropy
  • Regularization
