2020-01-25T20:44:49Z
http://oa.upm.es/cgi/oai2
oai:oa.upm.es:44822
2019-06-10T14:35:17Z
status=pub
subjects=matematicas
type=article
K-means algorithms for functional data
López García, María Luz
García-Rodenas, Ricardo
González Gómez, Antonia
Mathematics
Cluster analysis of functional data assumes that the objects to be grouped into a taxonomy
are functions f : X ⊂ ℝ^p → ℝ, and that the available information about each object is a sample at a finite set of points, f = {(x_i, y_i) ∈ X × ℝ}_{i=1}^n. The aim is to infer the meaningful groups by working explicitly with the functions' infinite-dimensional nature.
In this paper, the use of K-means algorithms to solve this problem is analysed through a comparative study of three variants: the K-means algorithm for raw data, a kernel K-means algorithm for raw data, and a K-means algorithm using two distances designed for functional data. These distances, denoted d_Vn and d_φ, are based on projections onto Reproducing Kernel Hilbert Spaces (RKHS) and on Tikhonov regularization theory. Although the two distances are shown to be equivalent, they lead to different strategies for reducing the dimensionality of the data: for the d_Vn distance, the most suitable strategy is Johnson–Lindenstrauss random projections, while the dimensionality reduction for d_φ is based on spectral methods.
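The raw-data variant described in the abstract can be sketched as ordinary Lloyd K-means applied to curves discretized on a common grid, with a Johnson–Lindenstrauss random projection as the dimensionality-reduction step. This is a minimal illustration under invented data and parameter choices (two synthetic curve families, 100 sample points, projection to 10 dimensions), not the paper's implementation of the d_Vn or d_φ distances.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(Y, k, n_iter=50):
    """Plain Lloyd K-means on the rows of Y (each row = one sampled function)."""
    centers = Y[rng.choice(len(Y), size=k, replace=False)].copy()
    labels = np.zeros(len(Y), dtype=int)
    for _ in range(n_iter):
        # Assign each curve to the nearest center; squared Euclidean distance
        # on the grid approximates the discretized L2 distance between functions.
        d = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers, keeping the old center if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centers[j] = Y[labels == j].mean(axis=0)
    return labels, centers

def jl_project(Y, m):
    """Johnson-Lindenstrauss random projection of the rows of Y to m dimensions."""
    R = rng.standard_normal((Y.shape[1], m)) / np.sqrt(m)
    return Y @ R

# Two synthetic families of noisy curves sampled at 100 grid points.
x = np.linspace(0, 1, 100)
group1 = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal((20, 100))
group2 = np.cos(2 * np.pi * x) + 0.1 * rng.standard_normal((20, 100))
Y = np.vstack([group1, group2])

# Reduce dimension first, then cluster the projected curves.
labels, centers = kmeans(jl_project(Y, 10), k=2)
```

The projection step is what distinguishes the d_Vn strategy in the abstract: distances between curves are approximately preserved in the low-dimensional space, so the clustering cost per iteration drops without materially changing the assignments.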
E.T.S.I. Montes (UPM)
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
2015
info:eu-repo/semantics/article
Article
NEUROCOMPUTING, ISSN 0925-2312, 2015, Vol. 151
PeerReviewed
application/pdf
spa
https://www.sciencedirect.com/science/article/pii/S0925231214012521
TRA2011-27791-C03-03
info:eu-repo/semantics/openAccess
info:eu-repo/semantics/altIdentifier/doi/10.1016/j.neucom.2014.09.048
http://oa.upm.es/44822/