Background/Aims: Clinical trial guidelines are non-specific about the recommended frequency, timing and nature of data audits. Source data verification (SDV) is the process of comparing data recorded on case report forms or in electronic records against the original source documents. The absence of a well-defined definition of data quality, and of a standard method for measuring error, undermines the reliability of data quality assessment. The aim of this review was to examine SDV auditing methods previously used to monitor data quality in clinical research settings.
Methods: A systematic literature review of published studies was conducted using the MEDLINE, Scopus and Science Direct databases. Studies were included if they implemented an SDV auditing method and excluded if the full text was unavailable or not in English.
Results: Of 802 studies identified, 15 were scrutinized. The nature and extent of SDV audit methods varied with the complexity of the source documents, the type of study, the variables audited (primary or secondary), the proportion of data verified (2-100%) and the audit frequency (every 1-24 months). The methods used to code, classify and calculate error were inconsistent. The main sources of error were transcription mistakes and the level of experience of data entry personnel. Repeated SDV audits on the same dataset demonstrated improvement in data quality over time.
Conclusions: Recommendations in the literature are inconsistent, and no single SDV method can be identified as the "gold standard". Significant variations in the procedures, policies, requirements and technologies of audit designs were identified. Clinical trials should employ a method that combines random sampling (~10% of records), coverage of both critical and non-critical variables, and repeated audits with quality improvement feedback.
Funding source: N/A