TY - GEN
T1 - Nested hyper-rectangles for exemplar-based learning
AU - Salzberg, Steven
N1 - Publisher Copyright:
© 1989, Springer-Verlag.
PY - 1989
Y1 - 1989
N2 - Exemplar-based learning is a theory in which learning is accomplished by storing points in Euclidean n-space, E^n. This paper presents a new theory in which these points are generalized to become hyper-rectangles. These hyper-rectangles, in turn, may be nested to arbitrary depth inside one another. This representation scheme is sharply different from the usual inductive learning paradigms, which learn by replacing Boolean formulae by more general formulae, or by creating decision trees. The theory is described and then compared to other inductive learning theories. An implementation, EACH, has been tested empirically on three different domains: predicting the recurrence of breast cancer, classifying iris flowers, and predicting survival times for heart attack patients. In each case, the results are compared to published results using the same data sets and different machine learning algorithms. EACH performs as well as or better than other algorithms on all of the data sets.
AB - Exemplar-based learning is a theory in which learning is accomplished by storing points in Euclidean n-space, E^n. This paper presents a new theory in which these points are generalized to become hyper-rectangles. These hyper-rectangles, in turn, may be nested to arbitrary depth inside one another. This representation scheme is sharply different from the usual inductive learning paradigms, which learn by replacing Boolean formulae by more general formulae, or by creating decision trees. The theory is described and then compared to other inductive learning theories. An implementation, EACH, has been tested empirically on three different domains: predicting the recurrence of breast cancer, classifying iris flowers, and predicting survival times for heart attack patients. In each case, the results are compared to published results using the same data sets and different machine learning algorithms. EACH performs as well as or better than other algorithms on all of the data sets.
UR - http://www.scopus.com/inward/record.url?scp=85037532122&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85037532122&partnerID=8YFLogxK
U2 - 10.1007/3-540-51734-0_61
DO - 10.1007/3-540-51734-0_61
M3 - Conference contribution
AN - SCOPUS:85037532122
SN - 9783540517344
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 184
EP - 201
BT - Analogical and Inductive Inference - International Workshop, AII 1989, Proceedings
A2 - Jantke, Klaus P.
PB - Springer Verlag
T2 - 2nd International Workshop on Analogical and Inductive Inference, AII 1989
Y2 - 1 October 1989 through 6 October 1989
ER -