Performance analysis of manual and automated systemized nomenclature of medicine (SNOMED) coding

G. W. Moore, J. J. Berman

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


Many pathology departments rely on the accuracy of computer-generated diagnostic coding for surgical specimens. At present, there are no published guidelines to assure the quality of coding devices. To assess the performance of systemized nomenclature of medicine (SNOMED) coding software, manual coding was compared with automated coding in 9353 consecutive surgical pathology reports at the Baltimore Veterans Affairs Medical Center. Manual SNOMED coding produced 13,454 morphologic codes comprising 519 distinct codes; 209 were unique codes (assigned to only one report apiece). Automated coding obtained 23,744 morphologic codes comprising 498 distinct codes, of which 129 were unique codes. Only 44 (0.5%) instances were found in which automated coding missed key diagnoses on surgical case reports. Thus, automated coding compared favorably with manual coding. To achieve maximum performance, departments should monitor the output from automatic coders. Modifications in reporting style, code dictionaries, and coding algorithms can lead to improved coding performance.
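The abstract's summary statistics (total codes, distinct codes, and "unique" codes assigned to only one report) can be computed directly from coded reports. A minimal sketch, assuming each report is represented as a list of SNOMED morphology code strings (the function name and sample codes are illustrative, not from the paper):

```python
from collections import Counter

def coding_stats(report_codes):
    """Summarize SNOMED morphology codes across reports.

    report_codes: list of lists, one list of code strings per report.
    Returns (total_codes, distinct_codes, unique_codes), where a
    'unique' code is one assigned to only one report.
    """
    total = sum(len(codes) for codes in report_codes)
    reports_per_code = Counter()
    for codes in report_codes:
        reports_per_code.update(set(codes))  # count reports, not occurrences
    distinct = len(reports_per_code)
    unique = sum(1 for n in reports_per_code.values() if n == 1)
    return total, distinct, unique

# Hypothetical example: three reports with SNOMED-style morphology codes.
reports = [["M-40000", "M-80003"], ["M-40000"], ["M-72000", "M-40000"]]
print(coding_stats(reports))  # -> (5, 3, 2)
```

Running the same summary over manual and automated code sets allows the side-by-side comparison the study reports (e.g., 519 vs. 498 distinct codes).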

Original language: English (US)
Pages (from-to): 253-256
Number of pages: 4
Journal: American Journal of Clinical Pathology
Issue number: 3
State: Published - 1994
Externally published: Yes


Keywords

  • Code, MUMPS
  • Pathology
  • Quality assurance
  • Software
  • Translation

ASJC Scopus subject areas

  • Pathology and Forensic Medicine

