Discriminatively regularized least-squares classification

Authors:

Highlights:

Abstract

Over the past decades, regularization theory has been widely applied across machine learning to derive a large family of novel algorithms. Traditionally, however, regularization has focused on smoothness alone and does not fully exploit the underlying discriminative knowledge that is vital for classification. In this paper, we propose a novel regularization algorithm in the least-squares sense, called the discriminatively regularized least-squares classification (DRLSC) method, which is specifically designed for classification. Inspired by several recent geometrically motivated methods, DRLSC directly embeds the discriminative information, as well as the local geometry of the samples, into the regularization term, so that it exploits as much of the knowledge underlying the samples as possible and aims to maximize the margins between samples of different classes in each local area. Furthermore, by embedding equality-type constraints in the formulation, the solution of DRLSC follows from solving a set of linear equations, and the framework naturally handles multi-class problems. Experiments on both toy and real-world problems demonstrate that DRLSC often achieves better classification performance than classical regularization algorithms, including regularization networks, support vector machines, and some recently studied manifold regularization techniques.
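To make the abstract's key computational claim concrete (that a discriminatively regularized least-squares classifier reduces to solving a linear system, with multi-class handled naturally via one-hot targets), here is a minimal sketch. The regularizer below, a within-class graph Laplacian penalty minus a between-class one, is an illustrative stand-in in the spirit described; the exact DRLSC regularizer, constraints, and weightings in the paper differ. All names and parameter values are assumptions for the sketch.

```python
import numpy as np

# Illustrative discriminatively regularized least-squares classifier.
# NOTE: the regularizer here (within-class Laplacian minus between-class
# Laplacian on the classifier outputs) is a hypothetical stand-in for the
# paper's discriminative term, not the exact DRLSC formulation.

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 2-D.
n = 40
X = np.vstack([rng.normal([-2.0, 0.0], 0.5, (n, 2)),
               rng.normal([+2.0, 0.0], 0.5, (n, 2))])
y = np.repeat([0, 1], n)

# One-hot targets: the multi-class case reduces to one linear solve.
Y = np.eye(2)[y]

# Similar-pair (same class) and dissimilar-pair (different class) graphs.
same = (y[:, None] == y[None, :]).astype(float)
W_within = same - np.eye(2 * n)   # within-class adjacency (no self-loops)
W_between = 1.0 - same            # between-class adjacency

def laplacian(W):
    # Unnormalized graph Laplacian L = D - W.
    return np.diag(W.sum(axis=1)) - W

L_w = laplacian(W_within)
L_b = laplacian(W_between)

# Augment inputs with a bias column so f(x) = w^T x + b is linear in W.
Xa = np.hstack([X, np.ones((2 * n, 1))])

# Assumed hyperparameters: ridge term keeps the system well-conditioned.
gamma_ridge, gamma_w, gamma_b = 1.0, 1e-3, 1e-4

# Discriminative regularizer on the outputs f = Xa @ W:
# smooth within classes (penalize L_w), separate between classes
# (reward L_b), giving  tr(W^T Xa^T (g_w L_w - g_b L_b) Xa W).
R = gamma_w * L_w - gamma_b * L_b

# Setting the gradient of the regularized least-squares objective to zero
# yields a linear system A W = Xa^T Y -- no iterative optimization needed.
A = Xa.T @ Xa + gamma_ridge * np.eye(3) + Xa.T @ R @ Xa
W = np.linalg.solve(A, Xa.T @ Y)

pred = (Xa @ W).argmax(axis=1)   # multi-class decision: argmax of outputs
acc = (pred == y).mean()
print(acc)
```

The closed-form solve is the point of the sketch: once the discriminative term is expressed as a quadratic form in the classifier outputs, training cost is one symmetric linear system, and adding classes only widens the target matrix `Y`.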

Keywords: Classifier design, Discriminative information, Manifold learning, Pattern recognition

Article history: Received 27 March 2008, Revised 15 July 2008, Accepted 16 July 2008, Available online 23 July 2008.

Article URL: https://doi.org/10.1016/j.patcog.2008.07.010