Abstract
Information-theoretic learning is a learning paradigm that uses entropy and divergence concepts from information theory. A variety of signal processing and machine learning methods fall into this framework; the minimum error entropy principle is a typical example. In this paper, we study a kernel version of minimum error entropy methods that can be used to find nonlinear structures in the data. We show that kernel minimum error entropy can be implemented by kernel-based gradient descent algorithms with or without regularization, and we deduce convergence rates for both algorithms.
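The kernel-based gradient method the abstract refers to can be illustrated with a minimal sketch: gradient ascent on the empirical information potential (the Parzen-window quantity behind Rényi's quadratic entropy of the errors), with the hypothesis expanded in an RKHS as f(x) = Σ_k α_k K(x, x_k). This is not the paper's algorithm; the function names, the Gaussian kernel choice, the fixed step size, and the final bias centering are all assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    # Gram matrix of a Gaussian (RBF) kernel (an assumed kernel choice)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def kernel_mee_gd(X, y, sigma=0.5, width=1.0, lr=0.01, steps=300, reg=0.0):
    """Gradient ascent on the empirical information potential
        V(f) = (1/n^2) * sum_{i,j} G_sigma(e_i - e_j),  e_i = y_i - f(x_i),
    with f(x) = sum_k alpha_k K(x, x_k) in the RKHS.
    reg > 0 adds Tikhonov regularization, i.e. the objective V - reg * ||f||_K^2."""
    n = len(y)
    K = gaussian_kernel(X, X, width)
    alpha = np.zeros(n)
    for _ in range(steps):
        e = y - K @ alpha                       # residuals under current f
        D = e[:, None] - e[None, :]             # pairwise error differences e_i - e_j
        G = np.exp(-D ** 2 / (2 * sigma ** 2))  # Parzen window values
        # dV/de_i = -(2 / (n^2 sigma^2)) * sum_j G_ij * (e_i - e_j)
        dV_de = -(2.0 / (n ** 2 * sigma ** 2)) * (G * D).sum(axis=1)
        # chain rule: de/dalpha = -K; regularizer contributes -2*reg*K@alpha
        grad = -K @ dV_de - 2.0 * reg * (K @ alpha)
        alpha += lr * grad                      # ascent step on the objective
    # the error-entropy objective is shift-invariant, so fix the bias
    # by centering on the residual mean
    b = np.mean(y - K @ alpha)
    return alpha, b
```

Because minimum error entropy only concentrates the error distribution and ignores its location, the learned f is determined up to an additive constant; the final centering step is one common way to fix that constant.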
| Original language | English |
|---|---|
| Article number | 105518 |
| Journal | Journal of Approximation Theory |
| Volume | 263 |
| DOI | |
| Publication status | Published - Mar 2021 |
| Externally published | Yes |
Fingerprint
Explore the research topics of 'Kernel gradient descent algorithm for information theoretic learning'. Together they form a unique fingerprint.