TY - JOUR
T1 - Methodologically rigorous risk of bias tools for nonrandomized studies had low reliability and high evaluator burden
AU - Jeyaraman, Maya M.
AU - Rabbani, Rasheda
AU - Copstein, Leslie
AU - Robson, Reid C.
AU - Al-Yousif, Nameer
AU - Pollock, Michelle
AU - Xia, Jun
AU - Balijepalli, Chakrapani
AU - Hofer, Kimberly
AU - Mansour, Samer
AU - Fazeli, Mir S.
AU - Ansari, Mohammed T.
AU - Tricco, Andrea C.
AU - Abou-Setta, Ahmed M.
N1 - Publisher Copyright:
© 2020 Elsevier Inc.
PY - 2020/12
Y1 - 2020/12
N2 - Objective: To assess the real-world interrater reliability (IRR), interconsensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Nonrandomized Studies (NRS) of Interventions (ROBINS-I), and the ROB Instrument for NRS of Exposures (ROB-NRSE) tools. Study Design and Setting: A six-center cross-sectional study with seven reviewers (2 reviewer pairs) assessing the RoB using ROBINS-I (n = 44 NRS) or ROB-NRSE (n = 44 NRS). We used Gwet's AC1 statistic to calculate the IRR and ICR. To measure the evaluator burden, we assessed the total time taken to apply the tool and reach a consensus. Results: For ROBINS-I, both IRR and ICR for individual domains ranged from poor to substantial agreement. IRR and ICR on overall RoB were poor. The evaluator burden was 48.45 min (95% CI 45.61 to 51.29). For ROB-NRSE, the IRR and ICR for the majority of domains were poor, while the rest ranged from fair to perfect agreement. IRR and ICR on overall RoB were slight and poor, respectively. The evaluator burden was 36.98 min (95% CI 34.80 to 39.16). Conclusions: We found both tools to have low reliability, although ROBINS-I was slightly higher. Measures to increase agreement between raters (e.g., detailed training, supportive guidance material) may improve reliability and decrease evaluator burden.
KW - Evaluator burden
KW - Interconsensus reliability
KW - Interrater reliability
KW - Nonrandomized studies
KW - ROBINS-I
KW - RoB instrument for NRS of exposures
UR - http://www.scopus.com/inward/record.url?scp=85094324899&partnerID=8YFLogxK
U2 - 10.1016/j.jclinepi.2020.09.033
DO - 10.1016/j.jclinepi.2020.09.033
M3 - Article
C2 - 32987166
AN - SCOPUS:85094324899
SN - 0895-4356
VL - 128
SP - 140
EP - 147
JO - Journal of Clinical Epidemiology
JF - Journal of Clinical Epidemiology
ER -