A Simple Process to Speed Up Machine Learning Methods: Application to Hidden Markov Models

Authors

Khadoudja Ghanem, Mentouri University, Algeria

Abstract

An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, so that machine learning tasks can still be completed in a reasonable time. In this context, we present in this paper a simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The proposed process consists of two steps. The first is a preprocessing step that reduces the number of training instances without any information loss: training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each cluster representative. In the second step, all formulas in the classical HMM training algorithm (EM) that depend on the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while yielding the same initial, transition, and emission probability matrices as the classical HMM training algorithm; accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2,200 times are possible when the set contains about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
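To make the two steps concrete, the sketch below (our illustration, not code from the paper) trains a discrete-observation HMM with NumPy. Step 1 is reduced to its simplest lossless case, grouping exactly identical observation sequences and recording their frequencies as weights; step 2 is a standard multi-sequence Baum-Welch (EM) pass in which every expected count contributed by a unique sequence is multiplied by its weight. The function names (compress_training_set, weighted_baum_welch) and the usage values are hypothetical.

```python
import numpy as np
from collections import Counter


def compress_training_set(sequences):
    """Step 1 (preprocessing): group identical observation sequences and
    attach a weight equal to each sequence's frequency in the original
    training set, so the instance count shrinks with no information loss."""
    counts = Counter(tuple(seq) for seq in sequences)
    uniques = [list(seq) for seq in counts]
    weights = np.array([counts[tuple(seq)] for seq in uniques], dtype=float)
    return uniques, weights


def forward_backward(pi, A, B, obs):
    """Scaled forward-backward recursions for one discrete sequence."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c


def weighted_baum_welch(uniques, weights, pi, A, B, n_iter=20):
    """Step 2: multi-sequence Baum-Welch (EM) in which every expected
    count from a unique sequence is multiplied by its weight factor."""
    N, M = B.shape
    for _ in range(n_iter):
        pi_num = np.zeros(N)
        A_num = np.zeros((N, N)); A_den = np.zeros(N)
        B_num = np.zeros((N, M)); B_den = np.zeros(N)
        for obs, w in zip(uniques, weights):
            alpha, beta, c = forward_backward(pi, A, B, obs)
            gamma = alpha * beta            # state posteriors, rows sum to 1
            pi_num += w * gamma[0]
            for t in range(len(obs) - 1):
                xi = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / c[t + 1]
                A_num += w * xi             # weighted expected transitions
                A_den += w * gamma[t]
            for t, o in enumerate(obs):
                B_num[:, o] += w * gamma[t]  # weighted expected emissions
                B_den += w * gamma[t]
        pi = pi_num / weights.sum()
        A = A_num / A_den[:, None]
        B = B_num / B_den[:, None]
    return pi, A, B


# Hypothetical usage: short binary sequences guarantee many duplicates,
# so the compressed set is far smaller than the raw one.
rng = np.random.default_rng(0)
raw = [list(rng.integers(0, 2, size=10)) for _ in range(5000)]
uniques, weights = compress_training_set(raw)
pi0 = np.full(3, 1 / 3)
A0 = rng.dirichlet(np.ones(3), size=3)
B0 = rng.dirichlet(np.ones(2), size=3)
pi, A, B = weighted_baum_welch(uniques, weights, pi0, A0, B0)
```

Because each duplicate sequence would contribute identical forward-backward quantities, weighting the expected counts of a single representative reproduces the classical multi-sequence update exactly, which is why the learned matrices, and hence the classification accuracy, are unchanged.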

Keywords

Machine learning methods, HMM, EM, clustering, processing time.

Volume 2, Number 5