Open Access Research

Pooling annotated corpora for clinical concept extraction

Kavishwar B Wagholikar1*, Manabu Torii2, Siddhartha R Jonnalagadda1 and Hongfang Liu1

Author Affiliations

1 Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, USA

2 Department of Radiology, Georgetown University Medical Center, Washington, DC, USA


Journal of Biomedical Semantics 2013, 4:3  doi:10.1186/2041-1480-4-3

Published: 8 January 2013

Abstract

Background

The availability of annotated corpora has facilitated the application of machine learning algorithms to concept extraction from clinical notes. However, creating such annotations requires considerable expense and labor. A potential alternative is to reuse existing corpora from other institutions, pooling them with local corpora to train machine taggers. In this paper we investigate this approach by pooling corpora from the 2010 i2b2/VA NLP challenge and Mayo Clinic Rochester to evaluate taggers for the recognition of medical problems. Both corpora were annotated for medical problems, but under different guidelines. The taggers were constructed using an existing tagging system, MedTagger, which combines dictionary lookup, part-of-speech (POS) tagging, and machine learning for named entity prediction and concept extraction. We hope that this work will serve as a useful case study for facilitating the reuse of annotated corpora across institutions.
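The pooling idea can be illustrated with a minimal sketch. The corpora, token sequences, and the most-frequent-label baseline tagger below are all hypothetical stand-ins (MedTagger itself combines dictionary lookup, POS tagging, and machine learning); the point is only that pooling amounts to concatenating annotated training data, after guideline differences are reconciled:

```python
# Hypothetical sketch of corpus pooling; data and tagger are illustrative,
# not the MedTagger system described in the paper.
from collections import Counter, defaultdict

def train_tagger(corpus):
    """Learn the most frequent label per token (a minimal baseline tagger)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for token, label in sentence:
            counts[token.lower()][label] += 1
    return {tok: c.most_common(1)[0][0] for tok, c in counts.items()}

def tag(model, tokens):
    """Tag tokens, defaulting to 'O' (outside any concept) for unseen words."""
    return [model.get(t.lower(), "O") for t in tokens]

# Toy stand-ins for a local corpus and a foreign (pooled-in) corpus,
# both annotated for medical problems in a BIO scheme.
local_corpus = [
    [("chest", "B-PROBLEM"), ("pain", "I-PROBLEM"), ("resolved", "O")],
]
foreign_corpus = [
    [("severe", "B-PROBLEM"), ("headache", "I-PROBLEM"), ("today", "O")],
]

# Pooling: simply concatenate the training data from both sources.
pooled_model = train_tagger(local_corpus + foreign_corpus)
print(tag(pooled_model, ["severe", "headache", "and", "chest", "pain"]))
# → ['B-PROBLEM', 'I-PROBLEM', 'O', 'B-PROBLEM', 'I-PROBLEM']
```

The pooled model recognizes problems seen only in the foreign corpus, which is the benefit the paper evaluates; in practice the tagger is a trained machine learner and the corpora must first have their annotation guidelines reconciled.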

Results

We found that pooling was effective when the size of the local corpus was small and after some of the guideline differences were reconciled. The benefits of pooling, however, diminished as more locally annotated documents were included in the training data. We examined the annotation guidelines to identify factors that determine the effect of pooling.

Conclusions

The effectiveness of pooling corpora depends on several factors, including the compatibility of annotation guidelines, the distribution of report types, and the sizes of the local and foreign corpora. Simple methods for rectifying some of the guideline differences can facilitate pooling. Our findings need to be confirmed with further studies on different corpora. To facilitate the pooling and reuse of annotated corpora, we suggest that: i) the NLP community develop a standard annotation guideline that addresses the potential areas of guideline difference partly identified in this paper; ii) corpora be annotated with a two-pass method that focuses first on concept recognition, followed by normalization to existing ontologies; and iii) metadata such as the report type be created during the annotation process.