Empirical risk minimization (ERM) is a unified framework for a broad range of machine learning tasks. In this paper, we propose a novel algorithm, called ReHLine, for minimizing regularized ERMs with convex piecewise linear-quadratic loss functions and optional linear constraints. The proposed algorithm handles diverse combinations of loss functions, regularizers, and constraints, making it well-suited for complex domain-specific problems such as FairSVM, elastic net regularized quantile regression, and Huber minimization. In addition, ReHLine enjoys a provable linear convergence rate, and its per-iteration computational cost scales linearly with the sample size. The algorithm is implemented with both Python and R interfaces, and its performance is benchmarked on various tasks and datasets. Our experimental results demonstrate that ReHLine significantly surpasses generic optimization solvers in computational efficiency on large-scale datasets. It also outperforms specialized solvers such as LIBLINEAR for SVMs, hqreg for Huber minimization, and Lightning (SAGA, SAG, SDCA, SVRG) for smoothed SVMs, exhibiting exceptional flexibility and efficiency.
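The key idea behind ReHLine is the composite ReLU-ReHU representation: every convex piecewise linear-quadratic loss can be written as a finite sum of ReLU terms and ReHU terms, where ReHU_τ(z) equals 0 for z ≤ 0, z²/2 for 0 < z ≤ τ, and τ(z − τ/2) for z > τ. The self-contained NumPy sketch below (not part of the package) illustrates this decomposition for the hinge and Huber losses; the helper names `relu` and `rehu` are ours, not the library's.

```python
import numpy as np

# ReLU and ReHU building blocks from the ReHLine paper: any convex
# piecewise linear-quadratic loss decomposes into a sum of these terms.
def relu(z):
    return np.maximum(z, 0.0)

def rehu(z, tau):
    # ReHU_tau(z) = 0 for z <= 0, z^2/2 for 0 < z <= tau,
    # and tau*(z - tau/2) for z > tau.
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0.0, 0.0,
           np.where(z <= tau, 0.5 * z**2, tau * (z - 0.5 * tau)))

z = np.linspace(-3.0, 3.0, 13)

# Hinge loss max(0, 1 - z) is a single ReLU term: ReLU(-z + 1).
hinge = np.maximum(0.0, 1.0 - z)
assert np.allclose(hinge, relu(-z + 1.0))

# Huber loss with threshold delta equals ReHU_delta(z) + ReHU_delta(-z).
delta = 1.0
huber = np.where(np.abs(z) <= delta,
                 0.5 * z**2,
                 delta * (np.abs(z) - 0.5 * delta))
assert np.allclose(huber, rehu(z, delta) + rehu(-z, delta))
print("hinge and Huber match their ReLU-ReHU decompositions")
```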
ReHLine is the main hub of the project on GitHub.
ReHLine-benchmark hosts benchmarks of ReHLine on various ML tasks.
ReHLine-python hosts the Python package and its documentation.
ReHLine-r hosts the R package and its documentation.
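As a quick start with the Python package, a minimal SVM example is sketched below. The estimator name `ReHLine`, the `loss={'name': 'svm'}` argument, and the `coef_` attribute follow an early release of the Python API and may have changed since; consult the ReHLine-python documentation for the current interface.

```python
import numpy as np
from rehline import ReHLine  # assumes `pip install rehline`

# Simulate a binary classification problem with a linear signal.
np.random.seed(1024)
n, d, C = 1000, 3, 0.5
X = np.random.randn(n, d)
beta0 = np.random.randn(d)
y = np.sign(X @ beta0 + np.random.randn(n))

# Fit a C-regularized SVM; this keyword API reflects an early release
# of the package and is shown here as an illustrative sketch.
clf = ReHLine(loss={'name': 'svm'}, C=C)
clf.fit(X=X, y=y)
print("ReHLine SVM solution:", clf.coef_)
```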
@inproceedings{daiqiu2023rehline,
  title={ReHLine: Regularized Composite ReLU-ReHU Loss Minimization with Linear Computation and Linear Convergence},
  author={Dai, Ben and Qiu, Yixuan},
  booktitle={Advances in Neural Information Processing Systems 36},
  year={2023}
}