Reliability and agreement during the Rapid Entire Body Assessment: Comparing rater expertise and artificial intelligence.
Journal:
PLOS ONE
PMID:
40343925
Abstract
The purpose of this study was to examine the reliability of, and agreement between, human raters (novice, intermediate, and expert) and TuMeke Risk Suite when assessing work with the Rapid Entire Body Assessment (REBA). Twenty-one videos portraying veterinarians performing an equine radiograph were assessed with REBA by the human raters and by TuMeke Risk Suite (ergonomic artificial intelligence software). Intra-rater reliability of the final REBA score was highest for TuMeke Risk Suite (ICC = 1.0), followed by the expert rater (ICC = 0.89 (0.78-0.95)), and lowest for the novice rater (ICC = 0.51 (0.25-0.74)). Agreement between the expert rater and TuMeke Risk Suite was highest for the trunk, leg, and upper-arm scores, and lowest for the neck, wrist, and lower-arm scores. The REBA tool in TuMeke Risk Suite may benefit less experienced users by improving the reliability of their REBA assessments, especially when the trunk, legs, and upper arms are of primary interest.
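As context for the reliability statistics reported above, the following is a minimal sketch of how an intra-rater intraclass correlation coefficient (ICC) can be computed from repeated ratings of the same videos, using the pingouin Python library. The abstract does not state which software or ICC model the authors used, and the column names and scores below are hypothetical illustrations, not study data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: each video receives a final REBA score
# from the same rater in two separate rating sessions.
data = pd.DataFrame({
    "video":   [1, 2, 3, 4, 5] * 2,
    "session": ["first"] * 5 + ["second"] * 5,
    "reba":    [6, 9, 7, 11, 8,   6, 10, 7, 11, 9],
})

# Estimate intraclass correlation coefficients; pingouin returns all
# common ICC forms with 95% confidence intervals.
icc = pg.intraclass_corr(
    data=data, targets="video", raters="session", ratings="reba"
)
print(icc[["Type", "ICC", "CI95%"]])
```

Which ICC form is appropriate (e.g., single- versus average-measures, absolute agreement versus consistency) depends on the study design; this sketch only illustrates the general workflow.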