Distributed Kernel K-Means for Large Scale Clustering

Authors

Marco Jacopo Ferrarotti (1), Sergio Decherchi (1,2) and Walter Rocchia (1); (1) Istituto Italiano di Tecnologia, Italy; (2) BiKi Technologies, Italy

Abstract

Clustering samples according to an effective metric and/or vector space representation is a challenging unsupervised learning task with a wide spectrum of applications. Among the many clustering algorithms, k-means and its kernelized version still enjoy a wide audience because of their conceptual simplicity and efficacy. However, the systematic application of kernel k-means is hampered by its inherent quadratic memory scaling with the number of samples. In this contribution, we devise an approximate strategy to minimize the kernel k-means cost function in which the trade-off between accuracy and speed is automatically governed by the available system memory. Moreover, we define an ad-hoc parallelization scheme well suited for state-of-the-art hybrid CPU-GPU parallel architectures. We demonstrate the effectiveness of both the approximation scheme and the parallelization method on standard UCI datasets and on molecular dynamics (MD) data in the realm of computational chemistry. In this applicative domain, clustering plays a key role both for quantitatively estimating kinetic rates via Markov State Models and for providing a qualitative, human-comprehensible summary of the chemical phenomenon under study. For these reasons, we selected it as a valuable real-world application scenario.
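To make the memory bottleneck concrete, the sketch below shows standard (exact) Lloyd-style kernel k-means operating on a precomputed n-by-n Gram matrix. It illustrates the baseline algorithm whose quadratic memory footprint motivates the paper; it is not the authors' approximate distributed scheme, and the function and parameter names (kernel_kmeans, K, n_clusters) are hypothetical.

import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, rng=None):
    """Exact Lloyd-style kernel k-means on a precomputed Gram matrix K (n x n).

    Storing the full n x n kernel matrix is what drives the quadratic
    memory scaling discussed in the abstract.
    """
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)      # random initial assignment
    diag = np.diag(K)                              # K(x_i, x_i) terms

    for _ in range(n_iter):
        dist = np.empty((n, n_clusters))
        for c in range(n_clusters):
            members = np.flatnonzero(labels == c)
            if members.size == 0:                  # guard against empty clusters
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - mu_c||^2 expanded purely in kernel evaluations:
            # K_ii - (2/|C|) sum_j K_ij + (1/|C|^2) sum_{j,l} K_jl
            second = K[:, members].sum(axis=1) / members.size
            third = K[np.ix_(members, members)].sum() / members.size ** 2
            dist[:, c] = diag - 2.0 * second + third
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):     # assignments stable: converged
            break
        labels = new_labels
    return labels

# Hypothetical usage with an RBF Gram matrix built from a data array X of shape (n, d):
# from sklearn.metrics.pairwise import rbf_kernel
# labels = kernel_kmeans(rbf_kernel(X, gamma=0.1), n_clusters=5)

For instance, with 10^5 MD frames the Gram matrix alone occupies roughly 80 GB in double precision, which is the quadratic cost that the approximation proposed in the paper is designed to circumvent.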

Keywords

Clustering, Unsupervised Learning, Kernel Methods, Distributed Computing, GPU, Molecular Dynamics

Volume 7, Number 10