#]
#] *********************
#] "$d_References"'Mathematics/RKHS 0 notes.txt' -
www.BillHowell.ca 24Mar2022 initial
To view this file - use a text editor (not a word processor), constant-width font (eg Courier 10), tab = 3 spaces
48************************************************48
24************************24
# Table of Contents :
# $ grep "^#]" "$d_References"'Mathematics/RKHS 0 notes.txt' | sed 's/^#\]/ /'
24************************24
Issues :
24************************24
08********08
#] ??Mar2022
08********08
#] 24Mar2022 RKHS history
"$d_References"'Mathematics/RKHS Lorenzo Rosasco 12Feb2007 easy mit.edu class03.pdf'
https://www.mit.edu/~9.520/spring07/Classes/class03_rkhs.pdf
(not in pdf?)
RKHS were explicitly introduced in learning theory by Girosi (1997). Poggio and Girosi (1989) introduced Tikhonov regularization in learning theory and worked with RKHS only implicitly, because they dealt mainly with hypothesis spaces on unbounded domains, which we will not discuss here. Of course, RKHS were used much earlier in approximation theory (eg Wahba, 1990...) and computer vision (eg Bertero, Torre, Poggio, 1988...).
https://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space
Reproducing kernel Hilbert space
The reproducing kernel was first introduced in the 1907 work of Stanisław Zaremba concerning boundary value problems for harmonic and biharmonic functions. James Mercer simultaneously examined functions which satisfy the reproducing property in the theory of integral equations. The idea of the reproducing kernel remained untouched for nearly twenty years until it appeared in the dissertations of Gábor Szegő, Stefan Bergman, and Salomon Bochner. The subject was eventually systematically developed in the early 1950s by Nachman Aronszajn and Stefan Bergman.[4]
These spaces have wide applications, including complex analysis, harmonic analysis, and quantum mechanics. Reproducing kernel Hilbert spaces are particularly important in the field of statistical learning theory because of the celebrated representer theorem which states that every function in an RKHS that minimises an empirical risk functional can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies the empirical risk minimization problem from an infinite dimensional to a finite dimensional optimization problem.
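The representer theorem's finite-dimensional reduction can be sketched numerically. Below is a minimal kernel ridge regression example (not from the sources above; the Gaussian kernel, the bandwidth `gamma`, and the regularization weight `lam` are illustrative assumptions): the minimizer of the regularized empirical risk is f(x) = sum_i alpha_i k(x_i, x), so fitting reduces to solving an n-by-n linear system for alpha.

```python
# Sketch, assuming a Gaussian (RBF) kernel and squared-loss Tikhonov
# regularization; parameter names 'gamma' and 'lam' are illustrative.
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-3, gamma=1.0):
    """Minimize (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS.
    By the representer theorem the minimizer has the form
    f(x) = sum_i alpha_i k(x_i, x), so it suffices to solve
    (K + lam * n * I) alpha = y for the n coefficients alpha."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(alpha, X_train, X_new, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x) at new points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 1))
    y = np.sin(3 * X[:, 0])
    alpha = fit_krr(X, y, lam=1e-6)       # only 20 unknowns, not a function space
    print(np.abs(predict(alpha, X, X) - y).max())
```

With a tiny `lam` the fit nearly interpolates the training data; the point of the sketch is only that the optimization is over the n-vector alpha, not over an infinite-dimensional space of functions.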
...
Moore–Aronszajn theorem
We have seen how a reproducing kernel Hilbert space defines a reproducing kernel function that is both symmetric and positive definite. The Moore–Aronszajn theorem goes in the other direction; it states that every symmetric, positive definite kernel defines a unique reproducing kernel Hilbert space. The theorem first appeared in Aronszajn's Theory of Reproducing Kernels, although he attributes it to E. H. Moore.
Theorem. Suppose K is a symmetric, positive definite kernel on a set X. Then there is a unique Hilbert space of functions on X for which K is a reproducing kernel.
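The theorem's hypotheses can be checked numerically on a finite sample: a candidate kernel should produce a Gram matrix that is symmetric with no negative eigenvalues. A minimal sketch (not from the sources above; the Gaussian kernel and the tolerance are illustrative assumptions):

```python
# Sketch: checking the Moore-Aronszajn hypotheses (symmetric, positive
# definite kernel) on a finite sample via the Gram matrix.
import numpy as np

def gram(kernel, xs):
    """Gram matrix K[i, j] = kernel(xs[i], xs[j])."""
    return np.array([[kernel(a, b) for b in xs] for a in xs])

def is_symmetric_psd(K, tol=1e-10):
    """True if K is symmetric and its eigenvalues are >= -tol
    (tolerance absorbs floating-point error)."""
    if not np.allclose(K, K.T):
        return False
    return np.linalg.eigvalsh(K).min() >= -tol

if __name__ == "__main__":
    rbf = lambda a, b: np.exp(-abs(a - b) ** 2)   # Gaussian kernel on R
    xs = np.linspace(-2.0, 2.0, 15)
    print(is_symmetric_psd(gram(rbf, xs)))         # True
```

Passing this check on every finite sample is exactly positive definiteness, which by the theorem is enough for K to generate a unique RKHS.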
# enddoc