TY - JOUR
T1 - Evaluation and mitigation of cognitive biases in medical language models
AU - Schmidgall, Samuel
AU - Harris, Carl
AU - Essien, Ime
AU - Olshvang, Daniel
AU - Rahman, Tawsifur
AU - Kim, Ji Woong
AU - Ziaei, Rojin
AU - Eshraghian, Jason
AU - Abadir, Peter
AU - Chellappa, Rama
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
AB - Increasing interest in applying large language models (LLMs) to medicine is due in part to their impressive performance on medical exam questions. However, these exams do not capture the complexity of real patient–doctor interactions because of factors like patient compliance, experience, and cognitive bias. We hypothesized that LLMs would produce less accurate responses when faced with clinically biased questions as compared to unbiased ones. To test this, we developed the BiasMedQA dataset, which consists of 1273 USMLE questions modified to replicate common clinically relevant cognitive biases. We assessed six LLMs on BiasMedQA and found that GPT-4 stood out for its resilience to bias, in contrast to Llama 2 70B-chat and PMC Llama 13B, which showed large drops in performance. Additionally, we introduced three bias mitigation strategies, which improved but did not fully restore accuracy. Our findings highlight the need to improve LLMs’ robustness to cognitive biases, in order to achieve more reliable applications of LLMs in healthcare.
UR - http://www.scopus.com/inward/record.url?scp=85207162648&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85207162648&partnerID=8YFLogxK
U2 - 10.1038/s41746-024-01283-6
DO - 10.1038/s41746-024-01283-6
M3 - Article
C2 - 39433945
AN - SCOPUS:85207162648
SN - 2398-6352
VL - 7
JO - npj Digital Medicine
JF - npj Digital Medicine
IS - 1
M1 - 295
ER -