On comparing classifiers: Pitfalls to avoid and a recommended approach

Research output: Contribution to journal › Article › peer-review

635 Scopus citations

Abstract

An important component of many data mining projects is finding a good classification algorithm, a process that requires very careful thought about experimental design. If not done very carefully, comparative studies of classification and other types of algorithms can easily result in statistically invalid conclusions. This is especially true when one is using data mining techniques to analyze very large databases, which inevitably contain some statistically unlikely data. This paper describes several phenomena that can, if ignored, invalidate an experimental comparison. These phenomena and the conclusions that follow apply not only to classification, but to computational experiments in almost any aspect of data mining. The paper also discusses why comparative analysis is more important in evaluating some types of algorithms than for others, and provides some suggestions about how to avoid the pitfalls suffered by many experimental studies.
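One pitfall of this kind is the multiple-comparisons effect: when many algorithms are each tested against a baseline at a fixed significance level, the chance that at least one appears "significantly better" purely by luck grows quickly with the number of comparisons. The sketch below is an illustration of that effect under the simplifying assumption of independent tests (real comparisons on a shared test set are correlated, but the inflation is similar in spirit); the function name is ours, not from the paper.

```python
# Multiple-comparisons pitfall: comparing k classifiers to a baseline,
# each with a test at level alpha, inflates the family-wise error rate.
# Illustrative sketch assuming independent tests.

ALPHA = 0.05

def p_spurious_win(k: int, alpha: float = ALPHA) -> float:
    """Probability of at least one false 'significant' result among k tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 25):
    # A Bonferroni adjustment tests each comparison at alpha / k instead,
    # which keeps the family-wise rate at or below alpha.
    print(f"k={k:2d}  uncorrected={p_spurious_win(k):.3f}  "
          f"bonferroni={p_spurious_win(k, ALPHA / k):.3f}")
```

At k = 10, the uncorrected chance of a spurious "win" is already about 0.40, while the Bonferroni-adjusted rate stays below 0.05.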

Original language: English (US)
Pages (from-to): 317-328
Number of pages: 12
Journal: Data Mining and Knowledge Discovery
Volume: 1
Issue number: 3
State: Published - 1997

Keywords

  • Classification
  • Comparative studies
  • Statistical methods

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computer Networks and Communications
