Errors in Clinical Notes Generated by Speech Recognition Are Not Uncommon

Approximately 7 of every 100 words in clinical documentation created by speech recognition technology are dictated incorrectly, according to a study published in JAMA Network Open. These findings underscore the importance of manual review, quality assurance, and auditing by human editors.

In a cross-sectional analysis, investigators obtained 217 clinical documents created with speech recognition, comprising office notes (n=83), discharge summaries (n=75), and operative notes (n=59). A total of 144 physicians had used the speech recognition software (Dragon Medical 360 | eScription [Nuance]) to create these notes during 2016.

Investigators annotated errors in three versions of each document (the raw speech recognition output, the transcriptionist-edited document, and the physician's signed note) and compared each version against the audio recording and the medical record.
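
As a rough illustration of how such word-level discrepancies can be counted, the minimal Python sketch below aligns a hypothetical speech recognition draft against a hypothetical signed note and reports errors per 100 words. This is only an automated analogue of the study's manual annotation; the sample sentences, function name, and alignment method are assumptions for illustration, not the investigators' protocol.

import difflib

def errors_per_100_words(draft: str, reference: str) -> float:
    """Count words inserted, deleted, or substituted in the draft
    relative to the reference, expressed per 100 reference words."""
    draft_words = draft.lower().split()
    ref_words = reference.lower().split()
    matcher = difflib.SequenceMatcher(a=ref_words, b=draft_words)
    errors = 0
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            # Count the larger side of each mismatched span as errors.
            errors += max(i2 - i1, j2 - j1)
    return 100.0 * errors / len(ref_words)

# Hypothetical one-line example, not drawn from the study's documents.
sr_draft = "patient denies chest pain and short of breath"
signed_note = "patient denies chest pain and shortness of breath"
print(f"{errors_per_100_words(sr_draft, signed_note):.1f} errors per 100 words")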

Across all speech recognition notes, the error rate was 7.4%, or 7.4 errors for every 100 words. After transcriptionist review, the error rate decreased to 0.4%, and it fell further to 0.3% in signed notes. Most speech recognition notes contained at least one error (96.3%), compared with 58.1% of transcriptionist-edited notes and 42.4% of signed notes.
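
To put those rates in concrete terms, the short sketch below converts each stage's reported rate into the expected number of errors in a hypothetical 500-word note; the note length is an assumption chosen for illustration, not a figure from the study.

rates_per_100_words = {
    "speech recognition draft": 7.4,
    "after transcriptionist review": 0.4,
    "physician-signed note": 0.3,
}
note_length = 500  # hypothetical note length in words, not from the study
for stage, rate in rates_per_100_words.items():
    expected_errors = rate / 100 * note_length
    print(f"{stage}: ~{expected_errors:.1f} expected errors in a {note_length}-word note")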

Compared with other documentation types, discharge summaries demonstrated a higher mean speech recognition error rate (8.9% vs 6.6%; difference, 2.3%; 95% CI, 1.0%-3.6%; P <.001). Speech recognition notes generated by surgeons, meanwhile, had a lower mean error rate than those of other physicians (6.0% vs 8.1%; difference, 2.2%; 95% CI, 0.8%-3.5%; P =.002).

The study is limited by its relatively small number of notes, drawn from a narrow range of clinical settings, which reduces the generalizability of the findings across clinical care.

According to the researchers, developing automated methods for detecting and correcting errors in speech recognition-generated documentation is "vital to ensuring the effective use of clinicians' time and to improving and maintaining documentation quality, all of which can, in turn, increase patient safety."

Reference

Zhou L, Blackley SV, Kowalski L, et al. Analysis of errors in dictated clinical documents assisted by speech recognition software and professional transcriptionists. JAMA Network Open. 2018;1(3):e180530.
