Batch-Stochastic Sub-Gradient Method for Solving Non-Smooth Convex Loss Function Problems

Authors

KasimuJuma Ahmed, Federal Polytechnic Bali, Nigeria

Abstract

Mean Absolute Error (MAE) and Mean Square Error (MSE) are machine learning loss functions that not only estimate the discrepancy between a prediction and its true label but also guide the model toward its optimal parameters. The gradient is used in estimating the MSE model and the sub-gradient in estimating the MAE model. Batch and stochastic methods are two of the many variations of the sub-gradient method: the former considers the entire dataset per iteration, while the latter considers one data point per iteration. A batch-stochastic sub-gradient method has been developed that learns from a defined collection of data points per iteration, yielding a more stable estimated loss value than the stochastic method and greater memory efficiency than the batch method. The stability and memory efficiency of the method were tested using Structured Query Language (SQL). The new method shows greater stability, accuracy, convergence, memory efficiency and computational efficiency than existing methods for finding the optimal feasible parameter(s) of continuous data.
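The abstract's exact algorithm is not given here, so the following is only an illustrative sketch of the general idea: a mini-batch ("batch-stochastic") sub-gradient descent for a one-dimensional linear model under MAE loss, where each update uses a defined collection of data points rather than the whole dataset or a single point. All function names, hyperparameters, and the synthetic data are assumptions for illustration.

```python
# Sketch of mini-batch sub-gradient descent for MAE loss on y ≈ w*x + b.
# Not the paper's implementation; a generic illustration under assumed
# hyperparameters (batch_size, lr, epochs).
import random

def mae_subgradient_step(w, b, batch, lr):
    """One mini-batch sub-gradient step for MAE loss on a 1-D linear model."""
    gw, gb = 0.0, 0.0
    for x, y in batch:
        r = (w * x + b) - y
        # sign(r) is a valid sub-gradient of |r|; at r == 0 any value in
        # [-1, 1] is admissible, so 0 is used here.
        s = (r > 0) - (r < 0)
        gw += s * x
        gb += s
    n = len(batch)
    return w - lr * gw / n, b - lr * gb / n

def fit(data, batch_size=4, lr=0.05, epochs=200, seed=0):
    """Run mini-batch sub-gradient descent over shuffled batches."""
    rng = random.Random(seed)
    data = list(data)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            w, b = mae_subgradient_step(w, b, data[i:i + batch_size], lr)
    return w, b

# Synthetic data drawn from the line y = 2x + 1.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-20, 21)]
w, b = fit(data)
print(w, b)
```

With a constant step size, a sub-gradient method does not converge exactly but oscillates in a neighbourhood of the optimum whose radius shrinks with the learning rate; the batch size controls the trade-off between the noisy per-point updates of the stochastic variant and the memory cost of the full-batch variant.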

Keywords

Machine learning, Loss function, Sub-gradient, Mean Absolute Error (MAE), Prediction.

Full Text | Volume 13, Number 18