Adaptive regularization of weight vectors

Koby Crammer, Alex Kulesza, Mark Dredze

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

We present AROW, a new online learning algorithm that combines several useful properties: large margin training, confidence weighting, and the capacity to handle non-separable data. AROW performs adaptive regularization of the prediction function upon seeing each new instance, allowing it to perform especially well in the presence of label noise. We derive a mistake bound, similar in form to the second order perceptron bound, that does not assume separability. We also relate our algorithm to recent confidence-weighted online learning techniques and show empirically that AROW achieves state-of-the-art performance and notable robustness in the case of non-separable data.
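
To make the abstract's description concrete, below is a minimal sketch of an AROW-style update as it is commonly presented: a Gaussian over weight vectors with mean mu and covariance sigma, updated only when the hinge loss is positive. The update equations, the regularization parameter r, and all variable names are illustrative assumptions and are not taken from this abstract.

```python
import numpy as np

class AROWSketch:
    """Illustrative AROW-style online learner for labels in {-1, +1}.

    Maintains a mean weight vector `mu` and a covariance (confidence)
    matrix `sigma`; `r` is an assumed regularization parameter.
    """

    def __init__(self, n_features, r=1.0):
        self.mu = np.zeros(n_features)      # mean weight vector
        self.sigma = np.eye(n_features)     # per-feature confidence
        self.r = r                          # regularization strength

    def predict(self, x):
        return np.sign(self.mu @ x)

    def update(self, x, y):
        margin = y * (self.mu @ x)          # signed margin on this instance
        if margin >= 1.0:                   # no hinge loss: no update
            return
        v = x @ self.sigma @ x              # variance (confidence) along x
        beta = 1.0 / (v + self.r)
        alpha = (1.0 - margin) * beta       # step size scaled by the loss
        sigma_x = self.sigma @ x
        self.mu += alpha * y * sigma_x      # move mean toward correct label
        self.sigma -= beta * np.outer(sigma_x, sigma_x)  # shrink confidence along x
```

Because the covariance only shrinks along directions that have been seen, frequently observed features receive smaller and smaller updates, which is one intuition behind the robustness to label noise claimed in the abstract.
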

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference
Publisher: Neural Information Processing Systems
Pages: 414-422
Number of pages: 9
ISBN (Print): 9781615679119
State: Published - 2009
Externally published: Yes
Event: 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009 - Vancouver, BC, Canada
Duration: Dec 7 2009 - Dec 10 2009

Publication series

Name: Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference

Conference

Conference: 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009
Country/Territory: Canada
City: Vancouver, BC
Period: 12/7/09 - 12/10/09

ASJC Scopus subject areas

  • Information Systems
