TY - JOUR
T1 - The reported validity and reliability of methods for evaluating continuing medical education
T2 - A systematic review
AU - Ratanawongsa, Neda
AU - Thomas, Patricia A.
AU - Marinopoulos, Spyridon S.
AU - Dorman, Todd
AU - Wilson, Lisa M.
AU - Ashar, Bimal H.
AU - Magaziner, Jeffrey L.
AU - Miller, Redonda G.
AU - Prokopowicz, Gregory P.
AU - Qayyum, Rehan
AU - Bass, Eric B.
N1 - Copyright:
Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2008/3
Y1 - 2008/3
N2 - PURPOSE: To appraise the reported validity and reliability of evaluation methods used in high-quality trials of continuing medical education (CME). METHOD: The authors conducted a systematic review (1981 to February 2006) by hand-searching key journals and searching electronic databases. Eligible articles studied CME effectiveness using randomized controlled trials or historic/concurrent comparison designs, were conducted in the United States or Canada, were written in English, and involved at least 15 physicians. Sequential double review was conducted for data abstraction, using a traditional approach to validity and reliability. RESULTS: Of 136 eligible articles, 47 (34.6%) reported the validity or reliability of at least one evaluation method, for a total of 62 methods; 31 methods were drawn from previous sources. The most common targeted outcome was practice behavior (21 methods). Validity was reported for 31 evaluation methods, including content (16), concurrent criterion (8), predictive criterion (1), and construct (5) validity. Reliability was reported for 44 evaluation methods, including internal consistency (20), interrater (16), intrarater (2), equivalence (4), and test-retest (5) reliability. When reported, statistical tests yielded modest evidence of validity and reliability. Translated to the contemporary classification approach, our data indicate that reporting about internal structure validity exceeded reporting about other categories of validity evidence. CONCLUSIONS: The evidence for CME effectiveness is limited by weaknesses in the reported validity and reliability of evaluation methods. Educators should devote more attention to the development and reporting of high-quality CME evaluation methods and to emerging guidelines for establishing the validity of CME evaluation methods.
UR - http://www.scopus.com/inward/record.url?scp=43249088533&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=43249088533&partnerID=8YFLogxK
U2 - 10.1097/ACM.0b013e3181637925
DO - 10.1097/ACM.0b013e3181637925
M3 - Review article
C2 - 18316877
AN - SCOPUS:43249088533
SN - 1040-2446
VL - 83
SP - 274
EP - 283
JO - Academic Medicine
JF - Academic Medicine
IS - 3
ER -