A Data Manager is asked to manage SOPs for a department. Given equal availability of the following systems, which of the following is the best choice for managing the organizational SOPs?
Document management system
Customized Excel spreadsheet
Learning management system
Existing paper filing system
The best choice for managing Standard Operating Procedures (SOPs) in a compliant and auditable manner is a Document Management System (DMS).
According to the GCDMP (Chapter: Regulatory Requirements and Compliance) and ICH E6 (R2), SOPs must be version-controlled, securely stored, retrievable, and auditable. A validated DMS supports controlled access, document lifecycle management (draft, review, approval, and archival), and electronic audit trails, ensuring full compliance with FDA 21 CFR Part 11 and Good Documentation Practices (GDP).
While Learning Management Systems (C) track training, they are not intended for document control. Spreadsheets (B) and paper systems (D) cannot provide adequate version tracking, access security, or audit capability required for regulatory inspection readiness.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Regulatory Requirements and Compliance, Section 5.2 – SOP Management and Document Control
ICH E6 (R2) GCP, Section 5.5.3 – Document and Record Management
FDA 21 CFR Part 11 – Electronic Records and Signatures, Section 11.10 – System Validation and Document Controls
There is a modification to the CRF and a sudden increase in the number of queries generated in the EDC system. Which action is most likely to reduce the number of queries?
Make some of the existing edit checks manual
Introduce a source data verification process
Review the edit checks for correctness
Have the monitor close the queries
When a CRF modification leads to a sudden increase in EDC queries, the most likely cause is an error or misconfiguration in the edit checks introduced during or after the change. Therefore, the first step should be to review the edit checks for correctness.
The GCDMP (Chapter: Database Design and Validation) emphasizes that any database or CRF modification should trigger retesting of affected validation rules. Incorrect logic, thresholds, or missing conditional statements in automated edit checks can cause false or redundant queries, leading to unnecessary data management burden and site frustration.
Manually handling edit checks (option A) or adding SDV (option B) does not address the root cause. Having monitors close queries (option D) would mask the problem rather than resolve it.
Thus, the correct corrective measure is Option C: review and validate the edit checks to ensure proper functionality.
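As a purely illustrative sketch (not taken from the GCDMP), the Python snippet below shows how a threshold left over from the pre-modification CRF can generate false queries, and how reviewing and correcting the rule resolves them; the field, limits, and values are hypothetical.

# Hypothetical edit check: flag systolic blood pressure outside configured limits.
def check_systolic_bp(value, low=100, high=180):
    """Return query text if the value falls outside the configured limits."""
    if value < low or value > high:
        return f"Systolic BP {value} outside expected range {low}-{high}; please verify."
    return None

entries = [95, 110, 150, 185]

# Rule left over from the old CRF (low=100): the valid value 95 now fires a false query.
print([check_systolic_bp(v) for v in entries])

# Rule reviewed and retested against the revised CRF specification (low=90).
print([check_systolic_bp(v, low=90) for v in entries])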
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Design and Validation, Section 5.5 – Edit Check Testing and Review
ICH E6 (R2) GCP, Section 5.5.3 – Validation and Change Control for Electronic Systems
FDA 21 CFR Part 11 – System Validation and Change Documentation
In a physical therapy study, range of motion is assessed by a physical therapist at each site using a study-provided goniometer. Which is the most appropriate quality control method for the range of motion measurement?
Comparison to the measurement from the previous visit
Programmed edit checks to detect out-of-range values upon data entry
Reviewing data listings for illogical changes in range of motion between visits
Independent assessment by a second physical therapist during the visit
In this scenario, the variable of interest, range of motion (ROM), is a clinically measured, observer-dependent variable. The accuracy and reliability of such data depend primarily on the precision and consistency of the measurement technique, not merely on data entry validation. Therefore, the most appropriate quality control (QC) method is independent verification of the measurement by a second qualified assessor during the visit (Option D).
According to the Good Clinical Data Management Practices (GCDMP, Chapter on Data Quality Assurance and Control), quality control procedures must be tailored to the nature of the data. For clinically assessed variables, especially those involving human judgment (e.g., physical measurements, imaging assessments, or subjective scoring), real-time verification by an independent qualified assessor ensures that data are valid and reproducible at the point of collection. This approach directly addresses measurement bias, observer variability, and instrument misuse, which are primary sources of data error in clinical outcome assessments.
Other options, while valuable, address only data consistency or plausibility after collection:
Option A (comparison to previous visit) and Option C (reviewing data listings) are retrospective data reviews, suitable for identifying trends but not preventing measurement error.
Option B (programmed edit checks) detects only extreme or impossible values, not measurement inaccuracies due to technique or observer inconsistency.
The GCDMP and ICH E6 (R2) Good Clinical Practice guidelines emphasize that data quality assurance should begin at the source, through standardized procedures, instrument calibration, and dual assessments for observer-dependent measures. Having an independent second assessor ensures inter-rater reliability and provides direct confirmation that the recorded value reflects an accurate and valid measurement.
Reference (CCDM-Verified Sources):
Society for Clinical Data Management (SCDM), Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 7.4 – Measurement Quality and Verification
ICH E6 (R2) Good Clinical Practice, Section 2.13 – Quality Systems and Data Integrity
FDA Guidance for Industry: Patient-Reported Outcome Measures and Clinical Outcome Assessment Data, Section 5.3 – Quality Control of Clinician-Assessed Data
SCDM GCDMP Chapter: Source Data Verification and Quality Oversight Procedures
An organization is using an international data exchange standard and a new version is released. Which of the following should be assessed first?
Availability of other standards covering the same content
Existence of backwards compatibility
Content coverage of the new version
Cost of migrating to the new version
When an updated version of a data exchange standard (such as CDISC SDTM, ADaM, or ODM) is released, the first factor that should be assessed is backwards compatibility. This determines whether the new version can interoperate with or accept data from prior versions without significant reconfiguration or data loss.
According to the Good Clinical Data Management Practices (GCDMP) and CDISC Implementation Guides, assessing backwards compatibility ensures that historical or ongoing study data remain valid and usable within the updated environment. If the new version introduces structural or semantic changes (such as variable name modifications or controlled terminology updates), it could impact mapping, validation, or regulatory submissions.
Once backward compatibility is confirmed, secondary assessments such as content coverage, availability of overlapping standards, and migration cost can be considered. However, ensuring that the new version supports existing infrastructure and data continuity is the first critical step before adoption.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Standards and Data Integration, Section 4.2 – Data Standards Updates and Compatibility Considerations
CDISC SDTM Implementation Guide, Section 1.5 – Backward Compatibility and Version Control
ICH E6(R2) GCP, Section 5.5 – Data Handling and Standardization
Which data are needed to monitor site variability in eligibility screening?
Number of sites with low enrollment
Number of subjects screened and number of subjects enrolled
Number of subjects enrolled
Number of sites with high enrollment
To monitor site variability in eligibility screening, you must analyze the number of subjects screened versus the number of subjects enrolled at each site. This allows identification of sites that are over- or under-screening relative to their enrollment yield.
The GCDMP (Chapter: Data Quality Assurance and Metrics) emphasizes that screening-to-enrollment ratios are critical indicators of protocol compliance and data quality. Sites with unusually low conversion rates may have an unclear understanding of inclusion/exclusion criteria, requiring targeted training or monitoring.
Other options (A, C, D) provide enrollment metrics but do not reveal screening efficiency or variability, which depend on both screening and enrollment data.
Thus, option B correctly identifies the data necessary for monitoring eligibility screening performance across sites.
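For illustration only, a minimal Python sketch of a screening-to-enrollment conversion calculation per site; the site names and counts are hypothetical.

# Hypothetical per-site screening and enrollment counts.
site_metrics = {
    "Site 001": {"screened": 40, "enrolled": 30},
    "Site 002": {"screened": 55, "enrolled": 12},
    "Site 003": {"screened": 25, "enrolled": 20},
}

# Conversion rate = enrolled / screened; an unusually low rate may signal an
# unclear understanding of the inclusion/exclusion criteria at that site.
for site, counts in site_metrics.items():
    rate = counts["enrolled"] / counts["screened"]
    print(f"{site}: screened={counts['screened']}, enrolled={counts['enrolled']}, conversion={rate:.0%}")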
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Quality Assurance and Metrics, Section 5.4 – Site Performance Metrics
ICH E6(R2) GCP, Section 5.18 – Monitoring and Site Oversight Requirements
A study team member wants to let sites enroll patients before the system is ready. Which are important considerations?
Without the ability to capture the data electronically, the data cannot be checked or used to monitor and manage the study
If the study were audited, enrolling subjects prior to having the EDC system ready would become an audit finding
There is no way to identify, report and track adverse events and serious adverse events without the EDC system in place
Starting the study prior to the EDC system being ready will delay processing of milestone-based site payments
Enrolling subjects before the Electronic Data Capture (EDC) system is ready poses major data integrity and compliance risks. The primary issue is that data cannot be accurately captured, validated, or monitored without the system in place.
Per the GCDMP (Chapter: Data Management Planning and Study Start-up), data collection systems must be fully validated, tested, and released before enrollment begins to ensure:
Real-time data entry and quality control
Proper tracking of adverse events (AEs/SAEs)
Audit trails and traceability for regulatory compliance
Option A highlights the most critical consequence — without an operational EDC, data collection and verification processes cannot occur, compromising data quality and study oversight.
While options B, C, and D may be partially true, they are secondary effects. The fundamental consideration is data capture capability and monitoring control, making option A correct.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Management Planning and Study Start-up, Section 4.2 – EDC Readiness and System Validation
ICH E6(R2) GCP, Section 5.5.3 – Computerized Systems Validation Before Use
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.1 – System Qualification Prior to Data Entry
A Data Manager is designing a report to facilitate discussions with sites regarding late data. Which is the most important information to display on the report to encourage sites to provide data?
Number of forms entered in the last week
Expected versus actual forms entered
List of outstanding forms
Total number of forms entered to date
In managing site data timeliness, the most actionable and effective tool is a report listing all outstanding (missing or incomplete) CRFs.
According to the GCDMP (Chapter: Communication and Study Reporting), Data Managers must provide site-level performance reports highlighting:
Outstanding CRFs not yet entered,
Unresolved queries, and
Pending data corrections.
Such reports help sites prioritize and address data gaps efficiently.
Options A and D are historical metrics without actionable context.
Option B gives a general overview but lacks specific site-level actionability.
Hence, option C (list of outstanding forms) provides the clearest and most motivating feedback to sites for timely data entry and query resolution.
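As a hypothetical sketch in Python (the subjects, visits, and form names are invented), an outstanding-forms listing can be derived simply as the set of expected forms not yet entered:

# Hypothetical expected-versus-entered form tracking for one site.
expected_forms = {
    ("SUBJ-001", "Visit 2", "Vital Signs"),
    ("SUBJ-001", "Visit 2", "Adverse Events"),
    ("SUBJ-002", "Visit 1", "Demography"),
}
entered_forms = {("SUBJ-001", "Visit 2", "Vital Signs")}

# The outstanding-forms listing is the set of expected forms not yet entered,
# giving the site a concrete, actionable work list.
for subject, visit, form in sorted(expected_forms - entered_forms):
    print(f"Outstanding: {subject} / {visit} / {form}")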
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Communication and Study Reporting, Section 5.3 – Data Timeliness and Reporting Metrics
ICH E6(R2) GCP, Section 5.1.1 – Sponsor Oversight and Data Communication Requirements
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.5 – Site-Level Data Timeliness Reporting
All of the following are preparation processes the data manager needs to take prior to database closure EXCEPT:
Checking for uncoded terms in all panels that are coded.
Ensuring all data expected for the study has been received.
Performing SAE reconciliation between the clinical and safety databases.
Ensuring study close out visits have been completed.
Before database lock, the Data Manager must confirm that all collected data are complete, validated, and reconciled across systems. This includes:
Ensuring data completeness (B): confirming all expected forms and data files have been received.
Verifying coded data (A): ensuring no pending terms remain in coding dictionaries like MedDRA or WHO Drug.
Performing SAE reconciliation (C): cross-checking the clinical database against the safety system for accuracy.
However, ensuring that study close-out visits are complete (D) is not a data management function; it falls under clinical operations and monitoring responsibilities. While data management may review confirmation of site close-outs, the activity itself is not part of pre-database lock procedures.
Therefore, option D correctly identifies the exception: an activity outside the data manager's direct scope of responsibility before database closure.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Database Lock and Archiving, Section 5.3 – Pre-Lock Validation and Reconciliation Activities
ICH E6(R2) GCP, Section 5.5.3 – Data Handling and Quality Control Prior to Lock
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.1 – Database Management and Lock Procedures
Query rules were tested with test data for each logic condition within each rule. Which of the following types of testing was conducted?
User box testing
White box testing
Black box testing
T box testing
Testing query rules with test data inputs to confirm expected outputs, without examining the underlying program logic, is an example of black box testing.
According to the GCDMP (Chapter: Data Validation and System Testing), black box testing is a functional testing approach used to verify that the system performs correctly from the end-user's perspective. In this method, testers input various conditions and observe outputs to ensure the system behaves as intended, for instance, that edit checks trigger correctly when data fall outside predefined limits.
In contrast, white box testing involves examining internal logic, code, and algorithm structures. Because data managers typically validate edit checks through data-driven test cases rather than code inspection, black box testing is the appropriate and industry-standard method. This ensures compliance with validation documentation standards as outlined in FDA 21 CFR Part 11, Section 11.10(a) and ICH E6 (R2) system validation expectations.
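A minimal, hypothetical Python sketch of black box testing a query rule: each test case pairs an input with the expected outcome, and the rule is exercised without inspecting its internal code; the rule and limits are illustrative only.

# Hypothetical query rule under test: fire a query if weight is outside 30-200 kg.
def weight_query_rule(weight_kg):
    return weight_kg < 30 or weight_kg > 200

# Black box test cases: input plus expected outcome, no inspection of the rule's code.
test_cases = [
    (25, True),    # below lower limit -> query expected
    (30, False),   # boundary value -> no query expected
    (120, False),  # typical value -> no query expected
    (250, True),   # above upper limit -> query expected
]

for value, expected in test_cases:
    result = weight_query_rule(value)
    status = "PASS" if result == expected else "FAIL"
    print(f"input={value} expected_query={expected} fired={result} -> {status}")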
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Validation and Testing, Section 4.1 – Testing Approaches (Black Box and White Box)
FDA 21 CFR Part 11 – System Validation Requirements
ICH E6 (R2) GCP, Section 5.5.3 – Computerized Systems Validation
It has been identified that ten adverse events were not reported in the trial prior to the database lock. What action should be taken to determine the next step?
Get the AE data entered immediately so the database can be locked again.
Evaluate the potential effect of the omission on the validity of the safety and efficacy analysis.
Notify upper management immediately so the monitor can contact the site.
Check the data from all sites again before relocking the database.
When adverse events (AEs) are discovered after a database lock, the appropriate first step is to evaluate the impact of the missing data on the integrity, safety analysis, and regulatory validity of the study results.
According to the GCDMP (Chapter: Data Quality Assurance and Control), any post-lock data discovery requires a root cause assessment and impact analysis before deciding whether to unlock the database. The key question is whether the missing AEs:
Affect primary safety endpoints,
Introduce bias in safety reporting, or
Alter efficacy conclusions.
Based on the assessment, the Data Management and Biostatistics teams determine if unlocking and correction are justified. Simply entering data immediately (A) or repeating checks (D) without analysis may violate data control procedures.
Hence, option B is correct: the first step is to assess the impact on data validity and analysis.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Quality Assurance and Control, Section 5.5 – Post-Lock Findings and Impact Assessment
ICH E6(R2) GCP, Section 5.1.1 – Quality Management and Risk Assessment
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.5 – Post-Lock Data Management
A statistician analyzes data from a randomized, double-blind, placebo-controlled study and finds that the placebo outperformed the investigational product. Which of the following is a plausible explanation for this?
The placebo was intended to contain medicinal properties.
Sites appropriately dispensed the investigational product to the subjects.
The treatment codes were incorrectly entered into the database.
The investigational product performed well in this study population.
In a randomized, double-blind, placebo-controlled study, if statistical analysis shows that the placebo appears to outperform the investigational product, a likely cause is a data management or coding error, particularly in treatment code entry or mapping.
According to the GCDMP (Chapter: Database Design and Build), treatment assignment data, typically stored in randomization or code-break files, must be accurately integrated into the clinical database. Any mismatch between randomization codes, subject identifiers, or treatment arms can lead to incorrect grouping during analysis, producing false conclusions such as placebo superiority.
The Data Manager should initiate a root cause review of randomization data integration and treatment mapping. The placebo is never designed to have active medicinal effects (option A). Option D is incorrect because the described scenario implies a data inconsistency, not true efficacy differences. Proper verification of randomization coding and reconciliation between data management and statistical programming systems are essential.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Design and Build, Section 6.1 – Randomization and Treatment Code Management
ICH E6 (R2) GCP, Section 5.5.3 – Data Verification and Coding Accuracy
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations – Data Mapping and Validation Requirements
In an EDC study, an example of an edit check that would be inefficient to run at data entry is a check:
Against a valid list of values.
Across visits for consistency.
Against a valid numeric range.
On the format of a date.
In Electronic Data Capture (EDC) systems, edit checks are categorized based on when and how they are executed, typically immediate (at data entry) or batch (post-entry). Checks that require data from multiple visits or forms are generally inefficient to run at data entry because they depend on information that may not yet exist in the system.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Validation and Cleaning), cross-visit consistency checks, such as comparing baseline and follow-up blood pressure or verifying date order between screening and dosing, should be executed as batch or scheduled validations, not at the point of data entry. Running these complex checks in real time can slow system performance, increase query load unnecessarily, and confuse site users if related data are not yet entered.
Conversely, edit checks against valid ranges, formats, or predefined value lists (options A, C, and D) are simple, local validations ideally performed immediately at data entry to prevent basic errors.
Therefore, cross-visit consistency checks (Option B) are best executed later, making them inefficient for real-time data entry validation.
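As an illustrative sketch (assumed field names and dates, not a specific EDC feature), a cross-visit consistency check of this kind might run as a scheduled batch job in Python:

from datetime import date

# Hypothetical batch check run on a schedule: dosing must not precede screening.
# At entry time the screening record may not yet exist, so this is unsuited to
# firing at the moment of data entry.
visits = {
    "SUBJ-001": {"screening": date(2024, 1, 10), "dosing": date(2024, 1, 20)},
    "SUBJ-002": {"screening": date(2024, 2, 5), "dosing": date(2024, 2, 1)},
}

for subject, dates in visits.items():
    if dates["dosing"] < dates["screening"]:
        print(f"Query: {subject} dosing date {dates['dosing']} precedes screening date {dates['screening']}")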
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 6.4 – Real-Time vs. Batch Edit Checks
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations – Section on Edit Checks and Data Validation Logic
CDISC SDTM Implementation Guide – Section on Temporal Data Consistency Validation
Which of the following laboratory findings is a valid adverse event reported term that facilitates auto coding?
Elevated HDL
ALT
Abnormal SGOT
Increased alkaline phosphatase, increased SGPT, increased SGOT, and elevated LDH
When coding adverse events (AEs) using MedDRA (Medical Dictionary for Regulatory Activities), valid AE terms must correspond to specific, medically meaningful concepts that match directly to a Preferred Term (PT) or Lowest Level Term (LLT) in the dictionary.
Among the options, “Elevated HDL” (High-Density Lipoprotein) represents a single, medically interpretable, and standard term that can directly match to a MedDRA LLT or PT. This makes it suitable for auto-coding, where the system automatically maps verbatim terms to MedDRA entries without manual intervention.
In contrast:
ALT (B) and Abnormal SGOT (C) are incomplete or nonspecific; they describe test names or qualitative interpretations rather than events.
Option D lists multiple findings, making it too complex for automatic mapping. Such compound entries would require manual coding review.
According to the GCDMP (Chapter: Medical Coding and Dictionaries), a valid AE term should be:
Clinically interpretable (not just a lab test name)
Unambiguous
Single-concept based, not a collection of results
Thus, option A (Elevated HDL) is correct, as it aligns with MedDRA’s single-concept, standard terminology structure suitable for auto-coding.
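For illustration, a simplified Python sketch of the auto-coding idea: a verbatim term codes automatically only when it matches a single dictionary entry, while test-name-only or compound entries fall out for manual review. The mini-dictionary below is a hypothetical stand-in, not MedDRA content.

# Hypothetical mini-dictionary standing in for MedDRA; auto-coding succeeds only
# on an exact, single-concept match.
dictionary = {"elevated hdl": "High density lipoprotein increased"}

def auto_code(verbatim):
    return dictionary.get(verbatim.strip().lower())  # None -> manual coding queue

for term in ["Elevated HDL", "ALT", "Increased alkaline phosphatase, increased SGPT"]:
    coded = auto_code(term)
    print(f"{term!r} -> {coded if coded else 'manual review'}")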
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Medical Coding and Dictionaries, Section 5.3 – Auto-coding and Verbatim Term Management
ICH M1 MedDRA Term Selection: Points to Consider, Section 2.1 – Coding Principles
ICH E2B(R3) – Clinical Safety Data Management: Data Elements for Transmission of Individual Case Safety Reports
The serious adverse event (SAE) database should be reconciled against the clinical trial database prior to which occasion?
Case report form data entry
Expedited safety reporting
Database quality audit
Database closure or locking
SAE reconciliation must be completed before database lock or closure to ensure all safety data are consistent between the clinical database and the pharmacovigilance (safety) database.
According to the GCDMP (Chapter: Safety Data Handling and Reconciliation), SAE reconciliation involves verifying that all adverse events reported in the clinical trial database are also captured and accurately recorded in the safety system (and vice versa). This is essential to confirm that no SAE is missing, misclassified, or inconsistently dated or coded between the two systems.
Performing this reconciliation before database lock ensures that any discrepancies are corrected, and both databases reflect consistent, verified information for regulatory submission. Conducting this after closure (or only at audit time) would risk data inconsistencies in the final submission datasets.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: SAE Reconciliation, Section 6.1 – Timing and Procedures for Reconciliation
ICH E2A/E2F – Clinical Safety Data Management: Definitions and Standards
FDA Guidance for Industry: E2A – Clinical Safety Data Management: Processing Standards for Safety Reports
What is the primary benefit of using a standard dictionary for medications?
To standardize recording of medications taken by patients across sites
To facilitate the reporting and analysis of possible drug interactions
To identify differences in medication components based on country of source
To improve safety monitoring of patients in a clinical trial setting
The primary benefit of using a standard drug dictionary (such as the WHO Drug Dictionary, WHO-DD Enhanced, or RxNorm) in clinical data management is to standardize the recording and representation of medications taken by study participants across all sites, countries, and data sources (Option A).
According to the Good Clinical Data Management Practices (GCDMP, Chapter on Medical Coding and Dictionaries), standardized coding ensures that all variations of drug names, including brand names, generic names, abbreviations, and misspellings, are consistently mapped to a uniform dictionary term. This harmonization allows for accurate aggregation, analysis, and regulatory reporting of concomitant medications and investigational products across multiple studies and global sites.
For example, "Paracetamol" and "Acetaminophen" are the same compound but are known by different names in different regions. Coding both to the same preferred term (PT) in the WHO Drug Dictionary ensures that all references are analyzed consistently in safety summaries and pharmacovigilance reports.
While other options describe secondary benefits:
Option B: Facilitating drug interaction analysis is an important downstream benefit, but it depends on having standardized coding first.
Option C: Identifying differences in medication components by country is a feature of dictionary metadata but not the primary goal.
Option D: Safety monitoring relies on consistent adverse event and drug data but is an overarching objective, not the direct function of dictionary coding.
Thus, the primary benefit lies in ensuring consistency, clarity, and interoperability of medication data across all clinical sites and systems, forming the foundation for reliable safety and efficacy analysis.
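As a hypothetical sketch in Python (the mapping table is a stand-in for a licensed drug dictionary, not actual WHO-DD content), the harmonization idea looks like this:

# Hypothetical mapping of verbatim medication names to one standardized term so
# that regional synonyms aggregate together in analysis.
drug_dictionary = {
    "paracetamol": "PARACETAMOL",
    "acetaminophen": "PARACETAMOL",
    "tylenol": "PARACETAMOL",
}

verbatims = ["Paracetamol", "Acetaminophen", "Tylenol"]
print([drug_dictionary[v.lower()] for v in verbatims])  # all three code to the same term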
Reference (CCDM-Verified Sources):
Society for Clinical Data Management (SCDM), Good Clinical Data Management Practices (GCDMP), Chapter: Medical Coding and Dictionaries, Section 6.1 – Purpose and Principles of Coding
WHO Drug Dictionary (WHO-DD) User Manual, Section 2.3 – Standardization of Medicinal Product Terminology
ICH E2B (R3) Clinical Safety Data Management – Data Elements for Transmission of Individual Case Safety Reports
FDA Study Data Technical Conformance Guide, Section 3.2 – Use of Controlled Terminology in Drug and Event Coding
In a cross-functional team meeting, a monitor mentions performing source data verification (SDV) on daily diary data entered by patients on mobile devices. Which of the following is the best response?
All diary data should be source data verified
The diary data should not be source data verified
Diary data to be source data verified should be selected using a risk-based approach
Diary data to be source data verified should be randomly selected
The best response is that diary data to be source data verified should be selected using a risk-based approach.
According to the GCDMP (Chapter: Data Quality Assurance and Control) and FDA Guidance on Risk-Based Monitoring (RBM), not all data require full SDV. Electronic patient-reported outcome (ePRO) or mobile diary data are typically direct electronic source data (eSource) captured at the time of entry, which already ensures authenticity and traceability.
A risk-based SDV approach focuses verification efforts on data critical to subject safety and primary efficacy endpoints, as defined in the study’s Risk Assessment Plan or Monitoring Plan. Random or full verification of low-risk data (like diary compliance metrics) adds unnecessary effort and cost.
Thus, Option C aligns with current regulatory expectations and data management best practices.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 7.3 – Risk-Based Monitoring and SDV
ICH E6 (R2) Good Clinical Practice, Section 5.18 – Risk-Based Quality Management
FDA Guidance for Industry: Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring (2013)
What is the main reason 21 CFR Part 11 requires that EDC systems maintain an audit trail?
To preserve data integrity
To preserve the ability for modifications
To preserve source document verifications
To preserve data availability
The primary purpose of maintaining an audit trail as required under 21 CFR Part 11 is to preserve data integrity. According to the U.S. FDA’s regulation on electronic records and signatures, every change to electronic data must be traceable, including information about who made the change, when it was made, and what the change entailed.
The Good Clinical Data Management Practices (GCDMP) outlines that an audit trail provides a permanent, chronological record of all modifications to clinical data. This ensures transparency and allows the reconstruction of the course of data entry and modification. The regulation aims to prevent unauthorized or undocumented data manipulation, thereby maintaining the accuracy, reliability, and validity of electronic records.
FDA 21 CFR Part 11, Section 11.10(e) explicitly mandates that systems must use secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. This ensures the data remains trustworthy and defensible in regulatory reviews or inspections.
Therefore, the main reason for requiring an audit trail is to preserve data integrity: ensuring that all data captured, modified, or transmitted is authentic, accurate, and complete throughout the study lifecycle.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Regulatory Compliance and Data Integrity
FDA 21 CFR Part 11 – Electronic Records; Electronic Signatures, Section 11.10(e)
ICH E6 (R2) Good Clinical Practice, Section 5.5.3 – Data Integrity and System Validation
Which information is required by most systems to specify data entry screens?
User role, access level, and permissions
Data type, prompt, and response format
Page number and total number of pages
Help text, review parameters, and answers
When designing or configuring data entry screens within an Electronic Data Capture (EDC) system, three critical components are required for each field:
Data Type – Defines the nature of the data (e.g., text, numeric, date).
Prompt – The label or question displayed to the user.
Response Format – Specifies how the user enters or selects data (e.g., free text, drop-down, checkbox).
According to the GCDMP (Chapter: EDC Systems and Database Design), these three attributes form the logical data structure required to build and validate data entry interfaces. They ensure consistency in how information is captured, displayed, and validated during data entry.
User roles (A) and help text (D) are system- or form-level configurations rather than field-level specifications, and page numbers (C) relate to printed CRFs rather than digital data entry screens.
Therefore, option B (data type, prompt, and response format) correctly identifies the essential information needed to define data entry screens.
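A minimal, hypothetical example of field-level screen specifications expressed as Python data; the field names, prompts, and formats are illustrative only:

# Hypothetical field-level screen specification: each field carries a data type,
# the prompt shown to the user, and a response format.
field_specs = [
    {"field": "SEX",    "data_type": "coded",   "prompt": "Sex",         "response_format": "radio: M / F"},
    {"field": "WEIGHT", "data_type": "numeric", "prompt": "Weight (kg)", "response_format": "nnn.n"},
    {"field": "VISDAT", "data_type": "date",    "prompt": "Visit date",  "response_format": "DD-MMM-YYYY"},
]

for spec in field_specs:
    print(f"{spec['field']}: type={spec['data_type']}, prompt='{spec['prompt']}', format={spec['response_format']}")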
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: EDC Systems and Database Design, Section 4.3 – Screen Design Specifications
CDISC CDASH Implementation Guide, Section 3.2 – Data Field Attributes
ICH E6(R2) GCP, Section 5.5.3 – Data Capture and Input Standards
When a hospitalized subject in a cardiovascular trial experiences a repeated but mild episode of tachycardia, the physician decides to extend the subject's hospital stay for continued observation. How would this event be characterized?
Serious adverse event
Adverse event
Severe adverse event
Spontaneous adverse event
This event qualifies as a Serious Adverse Event (SAE) because it resulted in a prolonged hospitalization, even though the episode itself was mild.
According to ICH E2A and GCDMP (Chapter: Safety Data Handling and Reconciliation), an adverse event is considered “serious” if it results in any of the following outcomes:
Death,
Life-threatening situation,
Hospitalization or prolongation of existing hospitalization,
Persistent or significant disability/incapacity, or
Congenital anomaly/birth defect.
The severity (mild, moderate, severe) describes intensity, while seriousness describes regulatory significance and medical outcome. Thus, a mild tachycardia episode leading to an extended hospital stay meets the regulatory definition of an SAE.
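As an illustrative sketch in Python (the field names are assumptions, not a defined data standard), applying the seriousness criteria shows why a mild event can still be an SAE:

# An event is serious if any ICH E2A criterion is met, regardless of intensity.
def is_serious(event):
    criteria = ("death", "life_threatening", "hospitalization_or_prolongation",
                "disability", "congenital_anomaly")
    return any(event.get(criterion, False) for criterion in criteria)

tachycardia_episode = {
    "term": "Tachycardia",
    "severity": "mild",
    "hospitalization_or_prolongation": True,  # hospital stay was extended
}

print(is_serious(tachycardia_episode))  # True -> handle and report as an SAE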
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Safety Data Handling and Reconciliation, Section 5.2 – Definition and Classification of Serious Adverse Events
ICH E2A – Clinical Safety Data Management: Definitions and Standards for Expedited Reporting, Section II – Seriousness Criteria
FDA 21 CFR 312.32 – IND Safety Reporting: Serious Adverse Event Definitions
A CRF was approved by the Sponsor and development of a clinical database has been started according to the data management plan. What is the next responsibility of the Data Manager?
Prepare a communications plan
Prepare system requirements specification
Plan the timelines to ensure a clinical database is ready before the first screening
Prepare a data validation plan for the clinical database
Once the Case Report Form (CRF) has been finalized and database development has begun, the next primary responsibility of the Data Manager is to prepare a Data Validation Plan (DVP) for the clinical database.
According to the GCDMP (Chapter: Database Design and Build), the DVP documents all planned validation procedures, including edit checks, cross-form validations, discrepancy management workflows, and system testing requirements. This ensures that data entry, processing, and cleaning are consistent with protocol requirements and that the database will produce reliable, auditable data for analysis.
While system requirement specifications (option B) are prepared before database development begins, and timeline planning (option C) occurs during the study startup phase, the DVP is the critical next step post-CRF approval to define and validate system logic before user acceptance testing (UAT).
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Design and Build, Section 6.4 – Data Validation Plan (DVP) Development
ICH E6 (R2) GCP, Section 5.5.3 – Validation of Computerized Systems
FDA 21 CFR Part 11 – System Validation Requirements for Electronic Records
The Scope of Work would answer which of the following information needs?
To look up which visit PK samples are taken
To look up the date of the next clinical monitoring visit for a specific site
To determine the number of database migrations budgeted for a project
To find the name and contact information of a specific clinical data associate
The Scope of Work (SOW) is a contractual document that outlines the specific deliverables, responsibilities, timelines, and budgetary details for a given project between the sponsor and the contract research organization (CRO).
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Project Management and Communication), the SOW defines what work will be performed, how many resources are allocated, and the expected deliverables. This includes detailed information such as:
The number of database builds or migrations,
Timelines for deliverables (e.g., database lock),
Responsibility distribution between sponsor and CRO, and
Budget parameters for defined activities.
Therefore, if a Data Manager needs to determine how many database migrations are budgeted for a project, the SOW is the correct document to reference.
Information such as PK sample scheduling (option A), site monitoring dates (option B), or staff contact details (option D) would be found in operational plans or contact lists, not in the SOW.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Project Management and Communication, Section 6.2 – Scope of Work Definition and Deliverables
ICH E6 (R2) GCP, Section 5.5.3 – Documentation and Responsibilities for Data Management Tasks
FDA Guidance for Industry: Oversight of Clinical Investigations – Sponsor and CRO Agreements
Which method would best identify clinical chemistry lab data affected by a blood draw taken distal to a saline infusion?
Abnormally high sodium values in a dataset
Lab values from a blood draw with a very high sodium and very low other values
Abnormally low urine glucose values in a dataset
Lab values from a blood draw with a very low sodium and very high other values
If a blood sample is drawn distal (downstream) from a saline infusion site, it may become contaminated with saline, leading to abnormal laboratory results. Saline contains a high concentration of sodium chloride, which artificially elevates sodium while diluting other blood components.
Therefore, such samples would display:
Very high sodium levels, and
Abnormally low levels of other analytes (e.g., proteins, glucose, potassium).
This abnormal pattern (option B) is a classic indicator of saline contamination.
Per the GCDMP (Chapter: Data Validation and Cleaning), cross-variable consistency checks are critical for identifying biologically implausible patterns, such as this one, which indicate pre-analytical errors rather than true physiological changes.
Hence, option B accurately describes the data signature of a contaminated blood draw.
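For illustration only, a cross-variable plausibility check of this kind could be sketched in Python as follows; the analytes, thresholds, and values are hypothetical, not GCDMP-defined limits:

# Hypothetical cross-variable plausibility check: flag panels with very high
# sodium alongside unusually low values for other analytes.
panels = [
    {"subject": "SUBJ-001", "sodium": 141, "potassium": 4.1, "glucose": 5.2},
    {"subject": "SUBJ-002", "sodium": 168, "potassium": 2.0, "glucose": 1.8},
]

for panel in panels:
    if panel["sodium"] > 155 and panel["potassium"] < 2.5 and panel["glucose"] < 2.5:
        print(f"Possible saline contamination: {panel['subject']} "
              f"(Na={panel['sodium']}, K={panel['potassium']}, Glu={panel['glucose']})")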
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 6.2 – Logical and Consistency Checks for Laboratory Data
ICH E6(R2) GCP, Section 5.1.1 – Data Quality and Biological Plausibility Checks
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.3 – Detecting Laboratory Anomalies
At a cross-functional study team meeting, a statistician suggests collecting blood gases electronically through the existing continuous hemodynamic monitoring system at sites rather than having a person record the values every five minutes during the study procedure. Assuming that sending, receiving, and integrating these data are possible, what is the best response?
Manual recording is preferred because healthcare devices are not validated to 21 CFR Part 11 standards
Manual recording is preferred because the sites may forget to turn on the machine and lose data
Electronic acquisition is preferable because more data points can be acquired
Electronic acquisition is preferable because the chance for human error is removed
Assuming the data transfer, integration, and validation processes are properly controlled and compliant, electronic acquisition of clinical data from medical devices is preferred because it allows more frequent and accurate data collection, leading to higher data resolution and integrity.
Per the GCDMP (Chapter: Technology and Data Integration), automated data collection minimizes manual transcription and reduces latency in data capture, ensuring both efficiency and completeness. While manual processes introduce human transcription errors and limit frequency, continuous electronic data capture can record thousands of accurate, time-stamped measurements, improving the study’s analytical power.
However, option D slightly overstates the case: human error is reduced, not entirely eliminated, since setup, calibration, and integration still involve human oversight. Therefore, option C is the best and most precise response, emphasizing the advantage of more robust and complete data capture.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Technology and Data Integration, Section 5.4 – Automated Data Acquisition and Validation
ICH E6(R2) GCP, Section 5.5.3 – Validation of Computerized Systems and Electronic Data Sources
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.3 – Direct Data Capture from Instruments and Devices
A Clinical Data Manager reads a protocol for a clinical trial to test the efficacy of an antiviral to counteract a new epidemic. The stated primary efficacy endpoint is 3-month survival. Which data element is needed for the primary efficacy endpoint?
Death date
Date of autopsy
Cause of death
Birth date
When the primary efficacy endpoint in a clinical trial is 3-month survival, the key data element required is the death date. This is because the survival endpoint is determined by calculating whether the subject lived or died within a defined time frame from study enrollment or randomization.
According to the GCDMP (Chapter: Data Management Planning and Study Start-up), the Clinical Data Manager (CDM) must identify and ensure the capture of all critical data elements necessary to evaluate the study endpoints. For time-to-event analyses (e.g., survival studies), accurate event dates (death date) are essential for endpoint derivation and statistical analysis.
Other data elements such as the date of autopsy or cause of death (options B and C) may support secondary analyses or safety reviews but are not necessary to determine the survival endpoint itself. Similarly, birth date (option D) contributes to demographic data but is unrelated to the primary efficacy outcome.
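A minimal sketch, assuming a 90-day window measured from randomization and treating a missing death date as survival (real censoring rules would come from the statistical analysis plan), of how the 3-month survival flag derives from the death date:

from datetime import date, timedelta

def survived_3_months(randomization_date, death_date=None, window_days=90):
    """Subject counts as surviving if no death date falls within the window."""
    cutoff = randomization_date + timedelta(days=window_days)
    return death_date is None or death_date > cutoff

print(survived_3_months(date(2024, 1, 1), death_date=date(2024, 2, 15)))  # False
print(survived_3_months(date(2024, 1, 1)))                                # True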
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Management Planning and Study Start-up, Section 4.4 – Critical Data Identification for Endpoints
ICH E9 – Statistical Principles for Clinical Trials, Section 2.2.3 – Time-to-Event Data Considerations
FDA Guidance for Industry: Clinical Trial Endpoints for Drug Development
What does RACI stand for?
Responsible, Accountable, Contribute, Input
Recommend, Approve, Calibrate, Innovate
Responsibility, Accountability, Consultation, Information
Responsible, Accountable, Consulted, Informed
RACI is a project management and governance framework used to define roles and responsibilities within a project. Each letter represents a distinct role type:
Responsible (R): The person(s) who perform the work or execute the task.
Accountable (A): The individual ultimately answerable for the task’s completion and success (only one per activity).
Consulted (C): Subject matter experts who provide input or guidance before decisions are made.
Informed (I): Individuals kept up to date on progress or outcomes but not directly involved in execution.
The RACI model ensures clarity in ownership and accountability, preventing duplication of effort or confusion over responsibility. It is highlighted in the GCDMP (Chapter: Project Management in Data Management) as a tool for ensuring clear delegation and communication within clinical data management teams.
Hence, option D is correct.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Project Management in Data Management, Section 5.1 – Roles, Responsibilities, and RACI Matrices
Project Management Institute (PMI) Framework – Responsibility Assignment Matrices (RACI)
ICH E6(R2) GCP, Section 5.1.1 – Defined Roles and Quality Oversight Responsibilities
For clinical investigational sites on an EDC trial, which of the following archival options allows traceability of changes made to data?
Storing the computer used at the clinical investigational site
Paper copies of the source documents
PDF images of the final eCRF screens for each patient
ASCII files of the site's data and related audit trails
Regulations and guidelines from the FDA and ICH require that electronic data be retained in a format that preserves audit trails and traceability.
While PDF images (option C) provide a static representation of data, they do not preserve the underlying audit trail (i.e., who changed what, when, and why). The ASCII data files with corresponding audit trails (option D) provide complete transparency and comply with 21 CFR Part 11 and GCDMP archival standards.
Option A (storing computers) is unnecessary and impractical, and option B (paper source documents) refers to site records, not system archives.
Hence, option D is correct: ASCII data files with audit trails meet traceability and compliance standards.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Database Lock and Archiving, Section 5.4 – Archival Formats and Audit Trail Retention
ICH E6(R2) GCP, Section 5.5.3 – Data Integrity, Audit Trails, and Record Retention
FDA 21 CFR Part 11 – Electronic Records; Audit Trail and Retention Requirements
What action should a data manager take if an investigator retires in the middle of an EDC trial and the replacement does not agree to use EDC for the remainder of the trial?
Notify the project manager and request that the site be closed.
Explore other options for the site with the study team.
Talk with the clinical research associate to identify alternative sites.
Discuss the use of the site's data with the project statistician.
When an investigator retires mid-study and the replacement refuses to use the Electronic Data Capture (EDC) system, the data manager must not take unilateral action but rather collaborate with the study team to explore acceptable solutions.
Per the GCDMP (Chapter: Project Management in Data Management), any deviation from the established data capture method, particularly a change that affects regulatory compliance, data consistency, or site operations, requires a cross-functional assessment. The study team, which includes clinical operations, project management, regulatory affairs, and data management, should evaluate feasible alternatives such as:
Allowing paper CRF entry followed by centralized data transcription,
Retraining site staff on EDC use, or
Temporarily suspending data entry until compliance can be restored.
Immediate site closure (option A) or unilateral decisions by data management (options C and D) violate escalation and communication protocols. Collaborative decision-making ensures continuity, compliance, and data integrity, in line with ICH E6 (R2) GCP and FDA 21 CFR Part 11.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Project Management and Communication, Section 5.2 – Handling Site and Investigator Changes
ICH E6 (R2) Good Clinical Practice, Section 4.1 – Investigator Responsibilities
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations – Section on EDC Operations and Site Management
Which is the MOST appropriate flow for EDC set-up and implementation?
CRF “wire-frames” created, CRFs reviewed, CRFs printed, CRFs distributed to sites
Protocol finalized, Database created, Edit Checks created, Database tested, Sites trained
Database created, Subjects enrolled, Database tested, Sites trained, Database released
Database created, Database tested, Sites trained, Protocol finalized, Database released
The correct and compliant sequence for EDC system setup and implementation begins only after the study protocol is finalized, as all case report form (CRF) designs, database structures, and validation rules derive directly from the finalized protocol.
According to the GCDMP (Chapter: EDC Systems Implementation), the proper order is:
Protocol finalized – defines endpoints and data requirements.
Database created – built according to the protocol and CRFs.
Edit checks created – programmed to validate data entry accuracy.
Database tested (UAT) – ensures functionality, integrity, and compliance.
Sites trained and system released – only then can data entry begin.
Option B follows this logical and regulatory-compliant sequence. Other options (A, C, D) are either paper-based workflows or violate GCP-compliant timelines (e.g., enrolling subjects before database validation).
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Electronic Data Capture (EDC) Systems, Section 5.2 – System Setup and Implementation Flow
ICH E6(R2) GCP, Section 5.5.3 – Computerized Systems Validation and User Training Before Use
FDA 21 CFR Part 11 – Validation and System Release Requirements
Which is the best reason why front-end checks are usually kept minimal, when compared to back-end checks, in a paper-based clinical study?
Data entry staff should be able to enter a value into the database just as it appears in the paper CRF
There is no need to alert the site personnel immediately about a data issue, as the study has happened already
There are approvals required to raise a Data Clarification Form which could take time
Data review can be performed at a later time due to the paper-based studies being smaller in size
In paper-based clinical studies, front-end data checks (those performed during data entry) are intentionally kept minimal to ensure that data are entered exactly as recorded on the paper CRF. This principle ensures data integrity by maintaining fidelity between source and electronic records before any cleaning or edit validation occurs.
The GCDMP (Chapter: Data Validation and Cleaning) explains that data entry operators should input values as written, even if they appear incorrect or inconsistent, because the purpose of front-end checks is not to interpret but to capture data faithfully. The back-end edit checks, performed later by data managers, are designed to identify inconsistencies, out-of-range values, or logical errors that require clarification through queries.
This approach separates data capture from data cleaning, minimizing bias and preserving original investigator input. Hence, option A accurately states the rationale for keeping front-end checks minimal in paper-based studies.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 4.2 – Data Entry, Edit Checks, and Query Process
ICH E6(R2) GCP, Section 5.5.3 – Data Handling and System Controls
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.1 – Data Entry and Verification Processes
Data from two sites are combined. One site coded gender as 1 and 2 (for Male and Female, respectively) while the other stored the data as M and F. Which term best describes the mapping?
Two-to-two
One-to-many
Many-to-one
One-to-one
When combining data from two datasets where one uses numeric codes (1 = Male, 2 = Female) and another uses text codes (M, F), each unique value in one dataset corresponds exactly to one unique value in the other.
This relationship is a one-to-one mapping, where each element in one dataset maps directly to a single corresponding element in the other.
1 → M
2 → F
Such mappings ensure consistent data harmonization during data integration and standardization phases, as outlined in the GCDMP (Chapter: Database Design and Integration).
Many-to-one (C) mapping would occur if multiple values (e.g., “Male,” “M,” “Man”) mapped to a single standardized value, which isn’t the case here.
Thus, the mapping is one-to-one, ensuring precise correspondence between both representations of gender data.
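For illustration, the one-to-one harmonization can be expressed as two small lookup tables in Python; the record values are hypothetical:

# Each source value corresponds to exactly one harmonized value (one-to-one).
numeric_to_standard = {"1": "M", "2": "F"}   # site storing 1 / 2
text_to_standard = {"M": "M", "F": "F"}      # site storing M / F

site_a = [{"subject": "A-001", "sex": "1"}, {"subject": "A-002", "sex": "2"}]
site_b = [{"subject": "B-001", "sex": "F"}]

harmonized = (
    [{"subject": r["subject"], "sex": numeric_to_standard[r["sex"]]} for r in site_a]
    + [{"subject": r["subject"], "sex": text_to_standard[r["sex"]]} for r in site_b]
)
print(harmonized)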
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Database Design and Build, Section 5.4 – Data Mapping and Harmonization
CDISC SDTM Implementation Guide, Section 5.2 – Controlled Terminology and Mapping Rules
ICH E6(R2) GCP, Section 5.5.3 – Data Integrity and Integration Principles
Which mode of data entry is most commonly used in EDC systems?
Double entry
Blind verification
Single entry
Third party compare
The most common mode of data entry in Electronic Data Capture (EDC) systems is single data entry.
According to the GCDMP (Chapter: Electronic Data Capture Systems), EDC systems have built-in edit checks, validation rules, and audit trails that ensure data accuracy and integrity at the point of entry. These real-time validation capabilities make double data entry (a legacy practice from paper studies) unnecessary.
EDC systems automatically verify data as they are entered by site staff, generating queries for inconsistencies or out-of-range values immediately. Blind verification (option B) and third-party comparisons (option D) are not standard data entry modes but may be used for specialized reconciliation or external data imports.
Thus, single data entry (Option C) is the industry standard approach, ensuring both efficiency and compliance with FDA 21 CFR Part 11 and ICH E6 (R2) data integrity requirements.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture (EDC) Systems, Section 5.4 – Data Entry and Verification Processes
ICH E6 (R2) Good Clinical Practice, Section 5.5.3 – Computerized Systems and Data Validation
FDA 21 CFR Part 11 – Electronic Records and Electronic Signatures: Validation and Data Entry Requirements
Which of the following SOPs are required for management of an EDC system?
Management of vendors
Measurement of data quality
Maintenance of coding dictionaries
Change control
The most essential Standard Operating Procedure (SOP) for management of an Electronic Data Capture (EDC) system is Change Control.
Per the GCDMP (Chapter: Computerized Systems and Compliance) and FDA 21 CFR Part 11, any changes made to an EDC system, whether to software configuration, study database design, or system functionality, must follow a documented, validated, and auditable change control process. This ensures that:
Modifications are properly authorized, tested, and approved before implementation.
System validation remains intact.
Data integrity, traceability, and regulatory compliance are maintained.
While vendor management (A) and coding maintenance (C) have supporting SOPs, change control (D) is mandatory for any system handling regulated clinical data. Measurement of data quality (B) is important but not specifically tied to system management procedures.
Thus, option D (Change control) is the correct answer.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Computerized Systems and Compliance, Section 5.3 – Change Control and System Maintenance
FDA 21 CFR Part 11 – Electronic Records and Electronic Signatures, Section 11.10(a–k)
ICH E6(R2) GCP, Section 5.5.3 – Computerized Systems Validation and Change Documentation
During an inspection to determine appropriate documentation for use of a computerized system, what SOP might the inspector expect to find?
Data management plan
Data backup plan
Statistical analysis plan
Edit specifications
During a regulatory inspection, inspectors expect to find documented Standard Operating Procedures (SOPs) governing the use, validation, and maintenance of computerized systems, including data backup and recovery procedures.
According to the GCDMP (Chapter: Computerized Systems and Compliance) and FDA 21 CFR Part 11, organizations must maintain an SOP that ensures data protection against loss, corruption, or unauthorized access. The SOP should describe backup frequency, secure storage, verification of backup integrity, and procedures for data restoration.
While the Data Management Plan (A) and Edit Specifications (D) are study-level documents, and the Statistical Analysis Plan (C) focuses on analysis procedures, only a Data Backup Plan (B) constitutes a required system-level SOP ensuring compliance and data continuity.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Computerized Systems and Compliance, Section 5.2 – Data Security, Backup, and Recovery SOPs
FDA 21 CFR Part 11 – Subpart B, Controls for Closed Systems
ICH E6(R2) GCP, Section 5.5.3 – System Security, Data Backup, and Recovery Requirements
QA is conducting an audit on a study for ophthalmology which is ready for lock. Inconsistencies are found between the database and the source. Of the identified fields containing potential data errors, which fields are considered critical for this particular study?
Subject Identifier
Concomitant Medications
Weight
Medical History
In an ophthalmology clinical study, data criticality is determined by how directly a data element affects safety evaluation, efficacy assessment, and regulatory decision-making. According to the Good Clinical Data Management Practices (GCDMP, Chapter on Data Validation and Cleaning), critical data fields are those that:
Have a direct impact on the primary and secondary endpoints, or
Are essential for safety interpretation and adverse event causality assessment.
Among the listed options, Concomitant Medications (Option B) are considered critical data for ophthalmology studies. This is because many ocular treatments and investigational products can interact with systemic or topical medications, potentially affecting ocular response, intraocular pressure, corneal healing, or visual function outcomes. Any inconsistency in concomitant medication data could directly influence safety conclusions or efficacy interpretations.
Other options, while important, are less critical for this study type:
Subject Identifier (A) is essential for data traceability and audit purposes but is not directly related to safety or efficacy outcomes.
Weight (C) may be relevant in dose-dependent drug trials but is rarely a pivotal variable in ophthalmology, where local administration (eye drops, intraocular injections) is common.
Medical History (D) provides contextual background but does not have the same immediate impact on endpoint analysis as current concomitant treatments that can confound the therapeutic effect or cause ocular adverse events.
Per GCDMP and ICH E6 (R2) GCP guidelines, data validation plans must define critical data fields during study setup, reflecting therapeutic area–specific priorities. For ophthalmology, concomitant medications, ocular assessments (visual acuity, intraocular pressure, retinal thickness, etc.), and adverse events are typically designated as critical fields requiring heightened validation, source verification, and reconciliation accuracy before database lock.
Thus, when QA identifies discrepancies between the CRF and source, the Concomitant Medications field (Option B) is the most critical to address immediately to ensure clinical and regulatory data integrity.
Reference (CCDM-Verified Sources):
Society for Clinical Data Management (SCDM), Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 6.4 – Critical Data Fields and Data Validation Prioritization
ICH E6 (R2) Good Clinical Practice, Section 5.18 – Monitoring and Source Data Verification
FDA Guidance for Industry: Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring, Section 5.3 – Identification of Critical Data and Processes
SCDM GCDMP Chapter: Data Quality Assurance and Control – Therapeutic Area–Specific Data Criticality Examples (Ophthalmology Studies)
Which method would best identify inaccuracies in safety data tables for an NDA?
Compare counts of appropriate patients from manual CRFs to counts in table cells
Compare counts of appropriate patients from line listings of CRF data to counts in table cells
Review the tables to identify any values that look odd
Review the line listings to identify any values that look odd
The best method for identifying inaccuracies in safety data tables prepared for a New Drug Application (NDA) is to compare counts of appropriate patients from line listings of CRF data to the counts in table cells.
According to the GCDMP (Chapter: Data Quality Assurance and Control), line listings represent raw, patient-level data extracted directly from the clinical database, whereas summary tables are aggregated outputs used for reporting and submission. Comparing these two sources ensures data traceability and accuracy, verifying that tabulated results correctly reflect the underlying patient data.
Manual CRF checks (option A) are less efficient and error-prone, as data entry is typically already validated electronically. Simply reviewing tables or listings for “odd values” (options C and D) lacks the systematic verification necessary for regulatory data integrity.
Thus, comparing line listings to tables (option B) provides a quantitative cross-check between the database and output deliverables, a standard practice in NDA data validation and statistical quality control.
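This count-against-count comparison is straightforward to script. The following is a minimal sketch, assuming the line listing and the summary table are available as CSV extracts; the file names and column names (USUBJID, AEBODSYS, N) are illustrative assumptions, not actual NDA deliverables.

```python
# Hypothetical QC sketch: recount patients from a line listing and compare the
# result to the counts reported in a summary table. File names, column names,
# and CSV layout are illustrative only.
import pandas as pd

listings = pd.read_csv("ae_line_listing.csv")   # patient-level AE records
table = pd.read_csv("ae_summary_table.csv")     # reported count (N) per body system

# Independently recount unique subjects per body system from the raw listing
recount = (
    listings.groupby("AEBODSYS")["USUBJID"]
    .nunique()
    .rename("RECOUNT")
    .reset_index()
)

# Merge with the reported table and flag any cells that do not match
check = table.merge(recount, on="AEBODSYS", how="outer")
mismatches = check[check["N"] != check["RECOUNT"]]
print(mismatches if not mismatches.empty else "All table cells match the listing counts.")
```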
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 5.2 – Validation of Tables, Listings, and Figures (TLFs)
FDA Guidance for Industry: Submission of NDA Safety Data, Section on Data Verification and Accuracy
ICH E6 (R2) GCP, Section 5.5.3 – Validation of Derived Data Outputs
What additional task does the site study coordinator role perform when utilizing an EDC application compared to paper CRF?
Resolving queries
Data entry
Data curation
Medical record abstraction
In paper-based trials, site staff (e.g., study coordinators) record data manually on paper Case Report Forms (CRFs), which are later transcribed by data entry personnel into an electronic database.
In EDC-based studies, however, the site coordinator is directly responsible for entering data into the EDC system. This eliminates the need for centralized double data entry and shortens data cleaning timelines.
The GCDMP (Chapter: Electronic Data Capture Systems) states that EDC systems shift certain tasks, including data entry, initial query response, and source verification preparation, to the site level. Of these, data entry is the most significant additional responsibility compared to paper-based studies.
Option A (Query resolution) is performed in both EDC and paper-based systems.
Option C (Data curation) is typically a Data Management function.
Option D (Medical record abstraction) is part of source documentation, not specific to EDC.
Thus, option B (Data entry) is correct; it is the additional site coordinator duty unique to EDC environments.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Electronic Data Capture (EDC) Systems, Section 5.3 – Site Responsibilities and Workflow Changes
ICH E6(R2) GCP, Section 5.5.3 – Data Entry and Role Delegation in Computerized Systems
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.2 – Site-Level Data Entry Controls
Which protocol section most concisely conveys timing of data collection throughout a study?
Study endpoints section
Study schedule of events
Protocol synopsis
ICH essential documents
The Study Schedule of Events (SoE) section of the protocol is the most concise and comprehensive representation of the timing of data collection throughout a study.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Management Planning and Study Start-up) and ICH E6 (R2) GCP, the SoE outlines which assessments, procedures, and data collections occur at each study visit (e.g., screening, baseline, treatment visits, follow-up). This table is a foundational tool for CRF design, database structure, and edit-check development, ensuring alignment between the protocol and data management systems.
While the study endpoints section (A) defines what is measured, and the protocol synopsis (C) summarizes the design, only the schedule of events (B) specifies when data collection occurs for each parameter. The ICH essential documents (D) pertain to regulatory documentation, not study visit timing.
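To illustrate how the SoE feeds downstream data management work, the sketch below represents a simplified schedule as a structure from which the expected CRF forms per visit could be derived; the visit names and assessments are hypothetical and not taken from any specific protocol.

```python
# Hypothetical, simplified Schedule of Events used to derive data management
# artifacts (CRF modules per visit, "missing form" edit checks). Visit names
# and assessments are illustrative only.
schedule_of_events = {
    "Screening": ["Informed Consent", "Medical History", "Vital Signs", "Labs"],
    "Baseline":  ["Vital Signs", "Labs", "Randomization"],
    "Week 4":    ["Vital Signs", "Adverse Events", "Concomitant Medications"],
    "Week 12":   ["Vital Signs", "Labs", "Adverse Events", "Study Completion"],
}

# List the expected CRF forms per visit -- the basis for database build
# specifications and expected-form edit checks.
for visit, assessments in schedule_of_events.items():
    print(f"{visit}: expected forms -> {', '.join(assessments)}")
```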
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Management Planning and Study Start-up, Section 4.1 – Using the Schedule of Events for Database Design
ICH E6 (R2) GCP, Section 6.3 – Trial Design and Schedule of Assessments
FDA Guidance for Industry: Protocol Design and Data Collection Standards
A site study coordinator attempts to make an update in a study database in an EDC system after lock. What occurs?
The old value is replaced in all locations by the new value
The change is approved by the Data Manager before it is applied
The site study coordinator is not able to make the change
The change is logged as occurring after lock
Once a clinical database is locked, it becomes read-only: no further data modifications can be made by any user, including site personnel. This ensures that the data are finalized, consistent, and auditable for statistical analysis and regulatory submission.
According to the GCDMP (Chapter: Database Lock and Archiving), the lock process involves freezing the database to prevent accidental or unauthorized changes. After lock, access permissions are restricted, and all edit and update functions are disabled. If any corrections are required post-lock, the database must be unlocked under controlled procedures, with full audit trail documentation.
Thus, option C (the site study coordinator is not able to make the change) correctly reflects standard EDC functionality and regulatory compliance.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Database Lock and Archiving, Section 5.2 – Database Lock Procedures and Controls
ICH E6(R2) GCP, Section 5.5.3 – Data Integrity and Audit Trail Requirements
FDA 21 CFR Part 11 – Controls for Electronic Records and System Lock Functions
Before the EDC system used for the trial is upgraded, what should be the data manager's first task?
Notify the sites of the upgrade
Update the user manual
Assess the impact on the data
Redesign the eCRF
Before implementing an EDC system upgrade, the first task of the Data Manager is to assess the impact on the data.
According to the GCDMP (Chapter: Electronic Data Capture Systems) and FDA 21 CFR Part 11, any system upgrade must undergo an impact assessment to determine how the change might affect data integrity, functionality, validation, and ongoing study operations. This assessment ensures that no data are lost, corrupted, or rendered inconsistent during or after the upgrade.
The Data Manager should evaluate:
Potential effects on existing data, edit checks, and reports,
System functionality impacting current workflows, and
Any revalidation requirements.
Only after the impact is understood should the Data Manager proceed to communicate with sites (option A), update documentation (option B), or modify CRFs if required (option D).
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture Systems, Section 7.3 – System Upgrades and Change Control
FDA 21 CFR Part 11 – Change Control and Validation Requirements
ICH E6 (R2) Good Clinical Practice, Section 5.5.3 – Change Impact on Data Integrity and System Validation
Which attribute is NOT a characteristic of a standardized data collection element?
An unambiguous definition for the data element
A strictly enforced requirement for the positioning of each data element on a case report form
A standard set of values used to respond to a data collection question
A unique set of data storage metadata, including a variable name and data type
A standardized data collection element has well-defined metadata, consistent naming conventions, and controlled terminology to ensure uniform data collection and interoperability across studies.
Key attributes, per the GCDMP and CDISC standards, include:
A clear definition of meaning (A)
A controlled set of response values (C)
Metadata specifications such as variable names, formats, and data types (D)
However, the physical positioning of a data element on a case report form (B) is a matter of form layout design, not a characteristic of data standardization. While consistent form structure aids usability, it is not part of data standardization or metadata management principles.
Hence, option B is correct: form positioning is not a standardized data element attribute.
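As an illustration of the attributes that do belong to a standardized element, the sketch below defines a single variable in a CDASH-like style; the variable name, codelist, and formats are assumptions for demonstration, not official CDISC content.

```python
# Hypothetical definition of a standardized data collection element. The names
# and codelist mimic a CDASH-style specification but are illustrative only;
# official CDISC metadata should be consulted in practice.
sex_element = {
    "variable_name": "SEX",                      # unique storage metadata
    "label": "Sex of the subject",               # unambiguous definition
    "data_type": "char",
    "max_length": 1,
    "codelist": {"M": "Male", "F": "Female", "U": "Unknown"},  # standard response values
    # Note: nothing here dictates where the field appears on the CRF page --
    # physical positioning is a form-design decision, not part of the standard.
}
print(sex_element["codelist"])
```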
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Standards and Data Integration, Section 4.1 – Data Element Standardization
CDISC CDASH Implementation Guide, Section 3.2 – Standardized Data Collection Elements and Metadata
ICH E6(R2) GCP, Section 5.5.3 – Data Handling and Standardization
Which of the following actions is particularly important in merging data from different trials?
Use of a common software platform
Enrollment of investigative sites with similar patient populations
Exclusion of studies that use a cross-over design
Use of a common adverse event dictionary
When merging data from different clinical trials, the use of a common adverse event (AE) dictionary such as MedDRA (typically alongside a common drug dictionary such as WHODrug for medications) is essential to ensure consistency and comparability across datasets.
According to the GCDMP (Chapter: Standards and Data Mapping) and the CDISC SDTM Implementation Guide, data integration across studies requires standardized terminology for adverse events, medications, and clinical outcomes. Using the same AE dictionary ensures that similar terms are coded consistently, allowing accurate cross-study analysis, pooled summaries, and safety reporting.
A shared software platform (option A) is not necessary if data are mapped to standard formats (e.g., CDISC SDTM). Patient population similarity (option B) affects interpretation but not technical data merging. Study design differences (option C) may influence statistical analysis but not data integration mechanics.
Therefore, Option D – Use of a common adverse event dictionary – is the correct and most critical action for consistent multi-study data integration.
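The sketch below illustrates why a shared dictionary matters when pooling: two study datasets are combined only after their verbatim AE terms are coded to one common set of preferred terms. The mapping shown is a toy stand-in for MedDRA coding, and the study identifiers and verbatim terms are invented for the example.

```python
# Hypothetical pooling sketch: verbatim AE terms from two studies are coded to
# a shared dictionary before pooled safety counts are produced. The mapping is
# a toy substitute for real MedDRA content.
import pandas as pd

common_dictionary = {
    "headache": "Headache",
    "head ache": "Headache",
    "nausea": "Nausea",
    "feeling sick": "Nausea",
}

study1 = pd.DataFrame({"STUDYID": "ABC-001", "USUBJID": ["01-001", "01-002"],
                       "AETERM": ["Head ache", "Nausea"]})
study2 = pd.DataFrame({"STUDYID": "ABC-002", "USUBJID": ["02-001", "02-002"],
                       "AETERM": ["Headache", "Feeling sick"]})

pooled = pd.concat([study1, study2], ignore_index=True)
pooled["AEDECOD"] = pooled["AETERM"].str.lower().map(common_dictionary)

# Pooled counts are only meaningful because both studies share the coded term
print(pooled.groupby("AEDECOD")["USUBJID"].nunique())
```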
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Standards and Data Mapping, Section 5.1 – Use of Standardized Coding Dictionaries
CDISC SDTM Implementation Guide, Section 4.3 – Controlled Terminology and Cross-Study Integration
ICH E3 and E2B – Clinical Data Standards and Safety Coding Requirements
Which is the most important reason why a data manager would review data before a monitor reviews it?
Data managers write the Data Management Plan that specifies the data cleaning workflow.
Data can be viewed and discrepancies highlighted prior to a monitor's review.
Data managers have access to programming tools to identify discrepancies.
The GCDMP recommends that data managers review data prior to a monitor's review.
The primary reason data managers review data before a monitor's review is to identify and flag discrepancies or inconsistencies so that site monitors can focus their efforts more efficiently during on-site or remote source data verification (SDV).
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Validation and Cleaning), proactive data review by data management staff ensures data completeness and accuracy by identifying missing, inconsistent, or out-of-range values. This pre-review streamlines the monitoring process, reduces the volume of open queries, and enhances data quality.
Option A is true but is not the main reason for pre-monitor review. Option C highlights a capability rather than a rationale. Option D is partially correct, but the GCDMP emphasizes process purpose, not prescriptive order. Thus, option B correctly captures the practical, process-oriented reason for early data review: to ensure data are ready and accurate for the monitor's review phase.
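A minimal sketch of what such a pre-monitor review might look like programmatically follows; the dataset, column names, and acceptable ranges are illustrative assumptions, not values from any real study.

```python
# Hypothetical pre-review sketch: flag out-of-range or missing values so the
# listing handed to the monitor already highlights likely discrepancies.
import pandas as pd

vitals = pd.DataFrame({
    "USUBJID": ["01-001", "01-002", "01-003"],
    "SYSBP":   [128, 310, None],   # systolic blood pressure (mmHg)
})

def flag(row: pd.Series) -> str:
    """Return a discrepancy note for one record, or an empty string."""
    if pd.isna(row["SYSBP"]):
        return "Missing value"
    if not 60 <= row["SYSBP"] <= 250:
        return "Out of expected range (60-250 mmHg)"
    return ""

vitals["DM_FLAG"] = vitals.apply(flag, axis=1)
print(vitals[vitals["DM_FLAG"] != ""])   # discrepancies highlighted before monitoring
```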
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 5.3 – Data Review Timing and Purpose
ICH E6(R2) GCP, Section 5.18 – Monitoring and Data Verification Requirements
What significant difference is there in the DM role when utilizing an EDC application?
Data updates are implemented by the sites
Database validation is not required
Metrics generation is required
Tracking of eCRFs is a monitor's responsibility
The most significant difference in the Data Manager's role when using an Electronic Data Capture (EDC) system is that data updates are implemented directly by site personnel (Option A).
According to the GCDMP (Chapter: Electronic Data Capture Systems), EDC technology shifts responsibility for data entry and correction from the sponsor or CRO to the investigator site, enabling real-time data entry and validation. This eliminates the need for double entry or remote data transcription, allowing Data Managers to focus on system validation, query management, and data quality oversight rather than physical data handling.
However, the EDC system still requires full validation (contrary to Option B). Metrics generation (Option C) and CRF tracking (Option D) are important but not unique to EDC-based workflows.
Thus, the correct answer is Option A – Data updates are implemented by the sites, reflecting the most fundamental operational shift introduced by EDC systems.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture (EDC) Systems, Section 4.1 – Role of the Data Manager in EDC
ICH E6 (R2) GCP, Section 5.5.3 – Electronic Data Entry and Responsibilities
FDA 21 CFR Part 11 – Electronic Records and Signatures: Data Entry Responsibilities
A Clinical Data Manager reads a protocol for a clinical trial to test the efficacy and safety of a new blood thinner for prevention of secondary cardiac events. The stated endpoint is all-cause mortality at 1 year. Which data element would be required for the efficacy endpoint?
Drug level
Coagulation time
Cause of death
Date of death
The efficacy endpoint of all-cause mortality at one year directly depends on the date of death for each subject, making Option D – Date of death the required data element.
According to the GCDMP (Chapter: Clinical Trial Protocols and Data Planning) and the ICH E3/E9 guidelines, the primary efficacy analysis must be based on time-to-event data when the endpoint involves mortality or survival. The date of death allows accurate calculation of the time from randomization to event, which is essential for survival analysis (e.g., Kaplan-Meier curves).
While cause of death (C) may be collected for safety or secondary analyses, all-cause mortality includes any death regardless of cause. Drug levels (A) and coagulation times (B) may serve as pharmacodynamic or exploratory endpoints but do not directly measure mortality.
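To show why the date of death is the operative data element, the sketch below derives the time-to-event value used in a survival analysis; the dates, the one-year cutoff, and the censoring convention are illustrative assumptions rather than a protocol-specified algorithm.

```python
# Hypothetical derivation for an all-cause mortality endpoint: the analysis
# variable is the time from randomization to death, or censoring at the cutoff
# if no death is recorded. Dates and cutoff handling are illustrative only.
from datetime import date
from typing import Optional, Tuple

def days_to_event(randomization: date, death: Optional[date], cutoff: date) -> Tuple[int, int]:
    """Return (time_in_days, event_flag) for an all-cause mortality endpoint."""
    if death is not None and death <= cutoff:
        return (death - randomization).days, 1   # death observed within the window
    return (cutoff - randomization).days, 0      # no death recorded: censored at cutoff

print(days_to_event(date(2024, 1, 10), date(2024, 9, 3), date(2025, 1, 10)))  # (237, 1)
print(days_to_event(date(2024, 1, 10), None, date(2025, 1, 10)))              # (366, 0)
```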
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Management Planning and Protocol Review, Section 5.4 – Defining Data Required for Endpoints
ICH E9 – Statistical Principles for Clinical Trials, Section 2.3 – Time-to-Event Endpoints
FDA Guidance for Industry: Clinical Trial Endpoints for Drug Development and Approval
Which of the following statements would be BEST included in a data management plan describing the process for making self-evident corrections in a clinical database?
A senior level data manager may make audited changes to the database without further documentation.
Self-evident corrections made in the database will be reviewed and approved by a team leader or manager.
No changes will be made in the database without a query response signed by the investigator.
Self-evident changes may be made per the listed conventions and documented to the investigative site.
A self-evident correction (SEC) refers to a data correction that is obvious, logical, and unambiguous, such as correcting an impossible date (e.g., 31-APR-2024) or standardizing a known abbreviation (e.g., "BP" to "Blood Pressure"). According to the Good Clinical Data Management Practices (GCDMP), SECs can be applied by data management staff following pre-approved conventions defined in the Data Management Plan (DMP).
The DMP should explicitly describe the criteria for SECs, including the types of errors eligible for this correction method, the required documentation, and the communication procedure for informing the investigative site. The process must maintain audit trail transparency and ensure that all changes are traceable and justified.
Options A and B suggest unauthorized or informal change procedures, which violate audit and compliance standards. Option C is too restrictive, as it prevents the efficient correction of non-clinical transcription or formatting errors.
Therefore, option D is correct: "Self-evident changes may be made per the listed conventions and documented to the investigative site." This approach aligns with CCDM expectations for balancing efficiency, accuracy, and regulatory compliance.
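The sketch below illustrates the shape of such a convention-driven process: the allowed corrections are enumerated up front (as the DMP would), applied consistently, and every change is logged so it can be communicated to the site. The convention entries, field names, and logging format are hypothetical examples, not GCDMP-prescribed content.

```python
# Hypothetical self-evident correction (SEC) sketch: pre-approved conventions
# are applied programmatically and every change is logged for site notification.
SEC_CONVENTIONS = {
    # (field, reported value, lowercased) -> corrected value
    ("UNITS", "mmhg"): "mmHg",
    ("TEMP_UNITS", "celcius"): "Celsius",
}

def apply_sec(field: str, value: str, log: list) -> str:
    """Apply a listed convention if one matches; record the change for the site."""
    corrected = SEC_CONVENTIONS.get((field, value.strip().lower()), value)
    if corrected != value:
        log.append(f"SEC applied: {field} '{value}' -> '{corrected}' (per DMP convention)")
    return corrected

site_notification_log: list = []
print(apply_sec("UNITS", "mmhg", site_notification_log))   # mmHg
print(site_notification_log)                               # record communicated to the site
```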
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 6.2 – Self-Evident Corrections
FDA 21 CFR Part 11 – Electronic Records; Audit Trails and Traceability Requirements