TY - JOUR
T1 - Measuring Quality of Maternal, Neonatal, Child, Reproductive Health and Nutrition Care with tools developed by the RADAR project and tested in Sub-Saharan Africa
AU - Marx, Melissa A.
AU - Frost, Emily
AU - Hazel, Elizabeth
AU - Mohan, Diwakar
N1 - Funding Information:
This study was funded by Global Affairs Canada through the Real Accountability Data Analysis for Results (RADAR) project under Grant number 7061914. We thank the following individuals for making the tool development and testing work possible: Neff Walker, Jennifer Bryce, Melinda Munos, Tim Roberton, Agbessi Amouzou, Robert Black, Emily Wilson, Aveika Akum, Helen Kuo, Bianca Jackson, Anooj Pattniak, Abdoulaye Maïga, Talata Sawadogo-Lewis, Ashley Sheffel, George Mwinnyaa, Joanne Katz, Amos Misomali, Amy Tsui, SK Sreedhara, Xiaomeng Chen, Rosemary Morgan, Brittany Furgal, Jonathan Alcazar, Anna Maria Pacheco Young, Scott Radloff, and Stella Babalola. Mali: Hamadoun Sangho, Assa Sidibe Keita, Abdoulaye Ongoïba, Maraim Traore Guindo, Haoua Dembele Keita, Boureyma Belem, Ibrahim Terera, Eliane Fanta Traore, Djénéba Coulibaly, Youssouf Keita, Luay Basil, Liliose Niyitegeka, Birahim Yaguemar Gueye. Malawi: Fanny Kachale, Andrew Likaka, Henry Phiri, Owen Chikhwaza, Mary Phiri, John Chawawa, Alufeyo Chirwa, Samuel Kapondera, Ndindase Mganga, Nelson Mkandawire, Lily Mwiba, the District Health Offices for Chitipa, Dedza, Machinga, Mangochi, Nkhata Bay and Salima, Patrick Msukwa, Peter Mvula, Ephraim Chirwa (posthumous), Khumbo Zonda. Tanzania: Abdunoor Mulokozi, Bakar Fakih, Mwifadhi Mrisho, Fatuma Manzi, Omar Lweno, Frida Ngalesoni, Imani Irema, Abdalla Mkopi, Jerome Mackay, Serafina Mkuwa, Jasmine Vallve. Mozambique: Anifa Ismael, Clarissa Teixeira, Sarah MacIndoe, Réka Cane, Júlia Sambo Abchande, Paulino da Costa, Sérgio Mahumane, Granélio Tamele, Jacyra, Claudia da Conceição Xavier, Marla Amaro, Luísa Maringue, Maria Isabel de Sousa Cambe. We also thank the data collection teams, supervisors, drivers, and participants in the evaluations.
Publisher Copyright:
© 2022 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
PY - 2022
Y1 - 2022
N2 - Increasing coverage of evidence-based maternal, neonatal, child, reproductive health and nutrition (MNCRHN) programs in low- and middle-income countries has coincided with dramatic improvements in health despite variable quality of implementation. Comprehensive evaluation to inform program improvement requires standardized but adaptable tools, which the Real Accountability, Data Analysis for Results (RADAR) project has developed. To inform selection of tools and methods packages (‘packages’) to measure program quality of care (QoC), we documented experiences testing the packages, which were developed and adapted based on global and local expertise and pre- and pilot-testing. We conducted cross-sectional studies in 2018–2019 on the quality of 1) integrated community case management, 2) counseling on maternal, infant, and young child feeding, 3) intrapartum care, and 4) family planning counseling in Mali, Mozambique, Tanzania, and Malawi. Herein we describe package performance and highlight experiences that inform their selection and use. Direct observation packages provided high-quality, immediately applicable results, but they required specialized expertise, in-person collection, adequate patient volume, reasonable wait times, and unambiguously ‘correct’ provision of care. General satisfaction questions from exit interview packages produced unvaryingly positive responses despite variable observed quality of care. Variation increased when questions were more targeted, but findings on caregivers’ and clients’ recall of recommendations were more actionable. When interactive, clinical vignettes could capture knowledge of clinical care, while for care that can be simulated, such as family planning counseling, simulated clients could capture provider practice. Clinicians could more easily demonstrate tactile aspects of intrapartum care using observed structured clinical examinations, but this method required storage and transport of the required mannequins. Based on our findings, we recommend ten questions upon which evaluators can base package selection. Findings from these packages inform programs and, in the context of comprehensive program evaluation, enable us to link programs with impact.
KW - Clinical vignettes
KW - Direct observation
KW - Exit interviews
KW - Measurement
KW - Program evaluation
KW - Simulated clients
KW - Observed structured clinical examinations
UR - http://www.scopus.com/inward/record.url?scp=85138233529&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138233529&partnerID=8YFLogxK
U2 - 10.1080/16549716.2021.2006469
DO - 10.1080/16549716.2021.2006469
M3 - Article
C2 - 36098957
AN - SCOPUS:85138233529
SN - 1654-9716
VL - 15
JO - Global Health Action
JF - Global Health Action
IS - sup1
M1 - 2006469
ER -