Please use this identifier to cite or link to this item: http://dspace.mediu.edu.my:8181/xmlui/handle/1721.1/7291
Title: A Note on Support Vector Machines Degeneracy
Keywords: AI
MIT
Artificial Intelligence
Support Vector Machines
Scale Sensitive Loss Function
Statistical Learning Theory
Issue Date: 9-Oct-2013
Description: When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems with dual optimal solutions with all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by $\mathbf{w} = \mathbf{0}$, and the resulting (degenerate) SVM will classify all future points identically (to the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.
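As a quick illustration of the degenerate case (a minimal sketch, not drawn from the memo itself), the following Python snippet uses scikit-learn's SVC on a hypothetical data set in which every negative point coincides with a positive point and the positive class supplies one extra point. In this configuration the learned weight vector collapses to $\mathbf{w} = \mathbf{0}$ and every query point is assigned to the majority class.

```python
# A minimal sketch (not from the memo) of the degenerate case described
# above: every negative point coincides with a positive point, and the
# positive class supplies one extra training point.
import numpy as np
from sklearn.svm import SVC

C = 1.0
X = np.array([[0.0, 0.0], [0.0, 0.0],   # coincident +/- pair
              [1.0, 1.0], [1.0, 1.0],   # coincident +/- pair
              [0.5, 0.5]])              # extra point for the majority (+1) class
y = np.array([+1, -1, +1, -1, +1])

clf = SVC(kernel="linear", C=C).fit(X, y)

print("w =", clf.coef_)            # collapses to (approximately) [0, 0]
print("b =", clf.intercept_)       # near +1, so the decision value is b everywhere
print("dual coefficients (y_i * alpha_i):", clf.dual_coef_)
print(clf.predict(np.random.rand(10, 2)))  # every point receives the majority label +1
```

At any dual optimum of this toy problem the negative coefficients are pinned at the bound $C$ and $\mathbf{w} = \mathbf{0}$, so the decision value is the constant $b$ and the SVM classifies all future points identically, as the abstract describes.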
URI: http://koha.mediu.edu.my:8181/xmlui/handle/1721.1/7291
Other Identifiers: AIM-1661
CBCL-177
http://hdl.handle.net/1721.1/7291
Appears in Collections: MIT Items

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.