Clinical performance comparators in audit and feedback: a review of theory and evidence

Bibliographic Details
Published in: Implementation Science : IS Vol. 14; Iss. 1; p. 39 - 14
Authors: Gude, Wouter T., Brown, Benjamin, van der Veer, Sabine N., Colquhoun, Heather L., Ivers, Noah M., Brehaut, Jamie C., Landis-Lewis, Zach, Armitage, Christopher J., de Keizer, Nicolette F., Peek, Niels
Format: Journal Article
Language: English
Published: London: BioMed Central, 24.04.2019 (BioMed Central Ltd; Springer Nature B.V.; BMC)
ISSN: 1748-5908
Online access: Full text
Description
Abstract:
Background: Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator.
Methods: We described current choices of performance comparator by conducting a secondary review of randomised trials of A&F interventions, and identified the associated mechanisms that might have implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis.
Results: Across 146 trials, we found that feedback recipients’ performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients’ own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. Among studies featuring benchmarks, 42% compared against mean performance. Eight (5.5%) trials provided a rationale for using a specific comparator. We distilled the mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies.
Conclusion: Clinical performance comparators in the published literature were poorly informed by theory and did not explicitly account for the mechanisms reported in qualitative studies. Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing more comparative information upon request, to balance the feedback’s credibility and actionability, (3) providing performance trends, but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information.
DOI: 10.1186/s13012-019-0887-1