Localized algorithms for multiple kernel learning
Abstract
Instead of selecting a single kernel, multiple kernel learning (MKL) uses a weighted sum of kernels, where the weight of each kernel is optimized during training. Such methods assign the same weight to a kernel over the whole input space. We discuss localized multiple kernel learning (LMKL), which is composed of a kernel-based learning algorithm and a parametric gating model that assigns local, data-dependent weights to the kernel functions. These two components are trained in a coupled manner using a two-step alternating optimization algorithm. Empirical results on benchmark classification and regression data sets validate the applicability of our approach. We see that LMKL achieves higher accuracy than canonical MKL on classification problems with different feature representations. LMKL can also identify the relevant parts of images by using the gating model as a saliency detector in image recognition problems. In regression tasks, LMKL improves performance significantly or reduces model complexity by storing significantly fewer support vectors.
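The abstract describes a combined kernel whose per-kernel weights vary with the input through a parametric gating model. A minimal sketch of this idea follows, assuming a softmax gating function and the common locally weighted form k(x_i, x_j) = Σ_m η_m(x_i) k_m(x_i, x_j) η_m(x_j); all function names and parameter shapes here are illustrative, not the paper's exact implementation.

```python
import numpy as np

def linear_kernel(X1, X2):
    """Plain linear kernel between two sample matrices."""
    return X1 @ X2.T

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel between two sample matrices."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * sq)

def gating(X, V, v0):
    """Softmax gating model: one weight per kernel for each input point.

    Rows sum to 1, so each sample distributes unit weight over the kernels.
    """
    scores = X @ V + v0                                   # (n_samples, n_kernels)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def localized_kernel(X1, X2, kernels, V, v0):
    """Locally weighted sum of kernels; the weights depend on each data point,
    unlike canonical MKL where one weight per kernel covers the whole space."""
    G1 = gating(X1, V, v0)
    G2 = gating(X2, V, v0)
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for m, k_m in enumerate(kernels):
        # eta_m(x_i) * k_m(x_i, x_j) * eta_m(x_j), vectorized via an outer product
        K += np.outer(G1[:, m], G2[:, m]) * k_m(X1, X2)
    return K
```

Because each summand is a symmetric positive semidefinite kernel scaled by an outer product of nonnegative gating values, the combined matrix stays symmetric on a single data set; in the alternating scheme the abstract mentions, the gating parameters (here `V`, `v0`) would be updated in one step and the kernel machine refit in the other.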
Keywords: Multiple kernel learning, Support vector machines, Support vector regression, Classification, Regression, Selective attention
Article history: Received 20 September 2011, Revised 28 March 2012, Accepted 2 September 2012, Available online 11 September 2012.
DOI: https://doi.org/10.1016/j.patcog.2012.09.002