Multi-domain learning by confidence-weighted parameter combination

Mark Dredze, Alex Kulesza, Koby Crammer

Research output: Contribution to journal › Article › peer-review

Abstract

State-of-the-art statistical NLP systems for a variety of tasks learn from labeled training data that is often domain specific. However, there may be multiple domains or sources of interest on which the system must perform. For example, a spam filtering system must give high quality predictions for many users, each of whom receives emails from different sources and may make slightly different decisions about what is or is not spam. Rather than learning separate models for each domain, we explore systems that learn across multiple domains. We develop a new multi-domain online learning framework based on parameter combination from multiple classifiers. Our algorithms draw from multi-task learning and domain adaptation to adapt multiple source domain classifiers to a new target domain, learn across multiple similar domains, and learn across a large number of disparate domains. We evaluate our algorithms on two popular NLP domain adaptation tasks: sentiment classification and spam filtering.
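
The abstract describes the approach only at a high level. As an illustration of what "confidence-weighted parameter combination" can look like, the sketch below combines per-domain linear classifiers whose weights are modeled as per-feature Gaussians, averaging the means weighted by their precisions (inverse variances). The function name and this specific combination rule are assumptions made for illustration, not necessarily the paper's exact algorithm.

```python
# Minimal sketch of precision-weighted parameter combination for
# confidence-weighted (CW) linear classifiers.  Each domain classifier keeps
# a per-feature Gaussian over its weights, N(mu, sigma^2); features the
# classifier is confident about have small variance.  One natural way to
# combine several domain classifiers (an assumption here) is to average the
# means weighted by their precisions (inverse variances).

import numpy as np

def combine_cw_classifiers(mus, sigmas):
    """Combine per-domain CW classifiers into one weight distribution.

    mus:    list of mean weight vectors, one per domain, each of shape (d,)
    sigmas: list of per-feature variance vectors, same shapes
    Returns the combined means and variances.
    """
    mus = np.asarray(mus)               # shape (k, d)
    precs = 1.0 / np.asarray(sigmas)    # per-feature precisions, shape (k, d)

    total_prec = precs.sum(axis=0)                      # combined confidence
    combined_mu = (precs * mus).sum(axis=0) / total_prec
    combined_sigma = 1.0 / total_prec
    return combined_mu, combined_sigma

# Toy usage: two "domains" with three shared features.
mu_a, sig_a = np.array([1.0, 0.5, -0.2]), np.array([0.1, 1.0, 2.0])
mu_b, sig_b = np.array([0.8, -0.4, 0.3]), np.array([0.5, 0.2, 2.0])
mu, sig = combine_cw_classifiers([mu_a, mu_b], [sig_a, sig_b])
print(mu, sig)  # low-variance (confident) features dominate the average
```

In this sketch, a feature learned confidently in one domain retains its weight in the combination even if the other domain is uncertain about it, which is the intuition behind combining classifiers by confidence rather than by simple averaging.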

Original language: English (US)
Pages (from-to): 123-149
Number of pages: 27
Journal: Machine Learning
Volume: 79
Issue number: 1-2
State: Published - May 2010
Externally published: Yes

Keywords

  • Classifier combination
  • Domain adaptation
  • Multi-task learning
  • Online learning
  • Transfer learning

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
