Learning a Maximal Margin Metric
Abstract
We aim to learn a Mahalanobis (quadratic) distance metric for support vector machines (SVMs). The goal is to learn a linear transformation of the input space such that, in the transformed space, the performance of the SVM is optimal. In contrast to neighbourhood component analysis (NCA), where a transformation is learned so that a stochastic nearest-neighbour classifier performs best in the transformed space, we replace the nearest-neighbour classifier with the state-of-the-art SVM. Better performance than NCA is expected owing to the SVM's stronger generalisation capability, especially in the case of small training samples. This replacement, however, raises a more complicated optimisation problem. We also show that many kernel-based learning algorithms benefit from the proposed framework.
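To make the setting concrete, the sketch below illustrates the core idea under the standard identification of a Mahalanobis metric d(x, y) = (x - y)^T M (x - y), M = L^T L, with the Euclidean metric after the linear map x -> Lx: candidate transformations L are scored by the validation accuracy of an SVM trained in the mapped space. The random-search loop, dataset, and all function names here are illustrative assumptions for exposition only; the paper's actual optimisation procedure is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A Mahalanobis metric with M = L^T L is the Euclidean metric after x -> L x,
# so "learning the metric" amounts to learning the linear map L.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def svm_accuracy(L):
    """Train a linear SVM in the space mapped by L; return validation accuracy."""
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X_tr @ L.T, y_tr)
    return clf.score(X_va @ L.T, y_va)

# Illustrative stand-in for the paper's optimiser: local random search over L,
# starting from the identity (i.e. the plain Euclidean metric).
best_L = np.eye(X.shape[1])
best_acc = svm_accuracy(best_L)
for _ in range(50):
    L = best_L + 0.1 * rng.standard_normal(best_L.shape)
    acc = svm_accuracy(L)
    if acc > best_acc:
        best_L, best_acc = L, acc

print(best_acc)
```

By construction the search never returns a transformation worse (on the validation split) than the identity, so the SVM in the learned space is at least as accurate as the SVM on the raw inputs; a gradient-based procedure, as developed in the paper, would replace the random perturbation step.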