Grinding to a Halt: The Effects of the Increasing Regulatory Burden on Research and Quality Improvement Efforts - IDSA POLICY PRINCIPLES
 
 
  Infectious Diseases Society of America
Infectious Diseases Society of America, Alexandria, Virginia
 
This article (written by William Burman and Robert Daum) was developed for the Infectious Diseases Society of America (IDSA) Research Committee: Edward Janoff (chair), Paul Bohjanen, Helen Boucher, William Burman, Richard D'Aquila, Barry Eisenstein, Carol Kauffman, Clifford Lane, David Margolis, Gary Marshall, Debra Poutsiaka, Adam Ratner, Barth Reller, Louis Rice, Edward Ryan, Paul Spearman, Chloe Thio, and Padma Natarajan (Research Committee staff). It was approved by the IDSA Board of Directors on 4 February 2009.
 
Clinical Infectious Diseases, August 2009
 
The Infectious Diseases Society of America is concerned that excessive regulatory oversight is seriously affecting translational research and quality improvement efforts. Careful studies on the subject of research oversight have documented the adverse effects of regulatory burden on clinical, epidemiological, and health systems research. We identified 5 problem areas. First, the application of the Health Insurance Portability and Accountability Act to research has overburdened institutional review boards (IRBs), confused prospective research participants, slowed research, and increased its cost. Second, local review of multicenter studies delays research and does not improve protocols or consent forms. Third, reporting of off-site adverse events to local IRBs wastes the resources of sponsors, investigators, and local IRBs and does not add to participant safety. Fourth, uncertainties about key terms in the regulations governing pediatric research lead to marked differences in the ways that local IRBs review research involving children. Fifth, the lack of consensus on when IRB review is required for quality improvement efforts is slowing progress in this critical area. Relatively simple steps, which do not require legislation or a change in the Common Rule, could improve regulatory oversight in these problem areas.
 
Epidemiological and clinical research is important in every field of medicine, particularly for infectious diseases. Interactions between humankind and the microbial world are remarkably dynamic; new infections are discovered, and previously described pathogens spread to new areas and develop enhanced virulence and antimicrobial resistance. There have been tremendous successes in research on infectious diseases. Within 3 years of the first clinical description of AIDS, the pathogen had been identified, and soon thereafter, therapy was developed that has saved hundreds of thousands of lives. Such progress requires a flexible research infrastructure that can assimilate new ideas and respond quickly to urgent research questions.
 
Six years ago, Califf and Muhlbaier [1] warned that "the system of research could become increasingly paralyzed as most of the transaction costs for research may be exhausted in response to regulations that have no useful purpose" (p. 917). Evidence gathered in the subsequent years has heightened concerns about excessive regulatory burden on translational research and quality improvement efforts. The Infectious Diseases Society of America (IDSA) is concerned that the research infrastructure in the United States is slowly grinding to a halt under this increasing burden of ineffective regulatory oversight; similar problems have been noted in other countries [2-5].
 
Institutional review boards (IRBs) are overwhelmed by the application of the Health Insurance Portability and Accountability Act (HIPAA) to research. Federally sponsored studies are being delayed and becoming more expensive as a result of regulatory burden [6-8]. Industry-sponsored clinical trials have largely left academic medical centers and are now moving out of the United States [9]. Quality improvement efforts are held up by uncertainties about when and how IRB review should be done. Finally, increasing regulatory burden is a major disincentive to trainees who are considering a career in research [10-12].
 
We are concerned about the current oversight system, but we are in complete agreement about the need for independent review of research involving humans. The unfortunate history of abuse of vulnerable subjects in research must not be repeated. To agree on the need for independent oversight, however, is not to defend the redundancies and inefficiencies that consume resources and delay research but do not contribute to the safety and privacy of research participants. Research oversight has itself become the subject of careful quantitative study. We used this literature to identify 5 areas in which pragmatic steps can be taken to improve research oversight (Table 1), steps that would not require new legislation or a change in the Common Rule [13].
 
The Example of HIPAA
 
HIPAA legislation was enacted to facilitate electronic billing, improve privacy protections, and promote continuity of health insurance coverage [14]. Notably, an advisory committee for the Department of Health and Human Services (DHHS) "identified no instances of breaches of confidentiality resulting from researcher use of records" [15] and noted the confidentiality protections that have long been a part of research oversight. Despite the lack of evidence of a problem and over the strong objections of the research community [16, 17], DHHS included research in HIPAA regulations [18]. As a result, many more forms of investigation and quality improvement require review, and an "authorization form" was added to the consent process [19].
 
The negative repercussions of HIPAA have echoed throughout the system. The workload of IRBs increased [20] at a time when they were already overloaded [21-23]. HIPAA authorization forms average 2 pages and use complex, legalistic language [19, 24, 25] unlikely to be understood by study participants [26]. In 2 controlled trials, prospective participants randomized to receive a HIPAA authorization form were less likely to enroll in a study than were participants who received only the informed consent document [27, 28].
 
A wide variety of research has been adversely affected by HIPAA [6, 29, 30], and the cost of doing multicenter studies has increased [7, 31]. Enrollment in epidemiological cohort studies and some clinical trials decreased markedly [7, 31-33], and selection biases were introduced [32, 33]. Health systems research has been particularly compromised [20, 30, 34, 35]. Although HIPAA regulations allow research on de-identified data without patient consent, the removal of HIPAA-defined identifiers from medical records resulted in a 31% reduction in data, including information of vital importance for research and quality improvement [36].
 
We are particularly concerned about HIPAA's effects on medical record reviews, because such studies are often the initial exposure of trainees to research. Medical record review introduces patient-oriented research, and its retrospective nature often allows completion of a project in the limited time available during training. In the post-HIPAA era, nearly all record reviews are judged to require IRB approval, and an increasing percentage are sent for full-committee review [20]. Even expedited review often requires 1-3 months [37], a delay that may preclude completion of a project.
 
The application of HIPAA to research is a lesson in unintended consequences. HIPAA legislation was not directed toward research, and there was no need to augment the existing confidentiality protections. Six years later, prospective participants are confused by authorization forms, IRBs are even more overburdened, research takes longer and costs more, and investigators are discouraged by the resulting "thicket of regulatory ambiguity" [6]. The Secretary of DHHS should remove research from the purview of HIPAA, as part of a "new framework for ensuring privacy" [30].
 
Redundant Review of Multicenter Studies
 
Many clinical trials and epidemiological studies require multiple sites to accrue participants and produce generalizable results. Traditionally, each study site submits the protocol and informed consent document to its own local IRB. Local review is said to be important to ensure that unique aspects of the local study population are dealt with appropriately (Table 2). Thus, a multicenter study may be reviewed by hundreds of IRBs.
 
Local review of multicenter studies requires substantial effort and expense. Sites in a tuberculosis study estimated that submission required a median of 30 h of staff time [39]. Local IRB review of a multicenter observational study required 15,000 pages of documents and consumed 16.8% of the entire budget [40]. Local review also delays study implementation; the median times to approval for multicenter protocols ranged from 1.5 to 15 months [2, 8, 34, 39, 45, 52, 56].
 
The outcomes of local review of multicenter studies have not been reported in detail for a large number of studies, but the available data are quite consistent. Study protocols are seldom changed, but local IRBs often reach markedly different interpretations of the same protocol across a wide variety of study types: pediatric [43, 44, 54], epidemiological [45, 57], health services [3, 34, 46, 47, 58], and minimal-risk research [48, 49].
 
Changes in consent forms are usually required during local review [39, 42, 45]. In studies that have carefully evaluated these changes, consent forms became longer [51] and more complex [39]. Indeed, local IRBs often require complex language to be used in consent forms [59]. Finally, errors in the study description or in the description of possible adverse effects have been made and approved during local review [39, 42].
 
In summary, local review of multicenter protocols delays study implementation and consumes valuable resources of local IRBs and investigators. That neither protocols nor consent forms are improved in the process (Table 2) strongly suggests that local review of multicenter studies is another redundant part of the system.
 
Steps to Increase Use of Central Review
 
Federal regulations allow one IRB to rely on the review of another IRB [60], allowing central or cooperative review of multicenter studies. Since being introduced by the National Cancer Institute (NCI) [61], the idea of central IRB review has slowly gained ground. The 2 NCI central IRBs have now been accepted by 600 local IRBs [62], and other federal agencies have begun to use the model [63, 64]. We recommend that all major institutes and centers at the National Institutes of Health (NIH) develop a central IRB for multicenter studies.
 
Despite encouragement from the Office for Human Research Protections (OHRP) and the US Food and Drug Administration (FDA), cooperative review is underused [65]. Local institutions continue to have concerns about, or lack familiarity with, central review [66, 67]. NIH and other federal agencies that fund research should develop incentives for central IRB review; applicants who use a central IRB could receive points toward the peer-review score of a grant application.
 
Adverse Event Reporting
 
Careful monitoring of adverse events is critical in interventional studies; despite extensive preclinical testing, there may be serious unanticipated side effects from new treatments [68, 69]. Data centers for multicenter trials have sophisticated systems for reporting and analysis of adverse events. Reports are completed over the internet and analyzed using software packages and professional review. The data center can review data in real time, by assigned treatment arm. If concerns are identified, they can be reviewed with the Data Monitoring Committee, an independent committee of subject experts and biostatisticians. This 21st-century system is how human subjects are, and should be, protected in interventional biomedical research.
 
Despite this robust method for monitoring patient safety in multicenter trials, there is a parallel system of adverse event reporting (Figure 1). Reports of serious adverse events (often as paper documents) are sent to all other investigators using the same study medication or device. Investigators review these reports and forward copies to their IRB. The local IRB reviews and stores these reports, consuming 9% of its resources in the process [53]. Importantly, neither the investigator nor the IRB has access to the data elements (study assignment and denominators) that would make adverse event reports meaningful. OHRP and FDA agreed that this parallel system is not required by the Common Rule [70] and that it has the effect of "inhibiting rather than enhancing IRBs' ability to adequately protect human subjects" [71].
 
Thus, there is general agreement that the system of adverse event reporting includes a redundant and expensive process that does nothing to improve patient safety. OHRP and FDA have responded to this situation with updated guidance documents [70, 72]. Unfortunately, these 2 documents differ in several important ways, leading to continued uncertainties about adverse event review.
 
The responsibility for adverse event analysis from multicenter studies lies with data centers and data monitoring committees; IRBs and site investigators should have no role, other than responding to a finding by a data monitoring committee. OHRP and FDA should develop consensus guidance on adverse event reporting. It is notable that most highly publicized cases of serious injury to research subjects have been in single-site studies [73, 74]. Freed of the wasteful effort of reviewing adverse events from multicenter studies, local IRBs can focus on reviewing reports from single-site studies.
 
Barriers to the Involvement of Children in Research
 
Children are unable to provide fully informed consent for participation in research and, therefore, have an enhanced level of regulatory protection. However, children have frequently been excluded from research. In the absence of data on pediatric-specific side effects or pharmacokinetics, new treatments have been used off-label in children [75]. Thus, an overzealous effort to protect children can have the paradoxical effect of harming them, when exclusion from research leads to the use of inappropriate medications or doses [76].
 
The Common Rule contains sections on oversight of pediatric research [77]. However, uncertainties about the interpretation of regulatory terms used to classify pediatric research ("minimal risk") and institutional risk aversion have led to markedly different decisions about pediatric trials by local IRBs [54]. The Common Rule allows national-level review by a panel of pediatricians and bioethicists to provide guidance on studies that raise concerns at the local level (the "407 process"). Although well-intentioned, the "407 process" has been so slow as to be a major impediment to research, requiring a median of 27 months for decisions about proposed pediatric trials [55].
 
We recommend that OHRP work with pediatric researchers, the IRB community, and bioethicists to provide clarity about key definitions for pediatric research. Furthermore, OHRP should continue its efforts to streamline the "407 process."
 
Review of Quality Improvement Projects
 
In recent years, quality improvement projects have been emphasized and required as a means of improving the health care system. At the same time, IRBs have become increasingly involved in review of quality improvement efforts. However, the lack of consensus [78] regarding when and how IRBs should review quality improvement activities was highlighted by a recent high-profile case. The Michigan Hospital Association evaluated the effect of a simple checklist on catheter-related bacteremia. The project was reviewed by the IRB of one of the consulting quality improvement experts and was judged not to be research, because all items on the checklist were part of national standards. The project was strikingly successful in decreasing rates of catheter-related bacteremia [79], and plans were made to disseminate it to other hospitals. OHRP reviewed the project after its publication and determined that the project was research and had not been adequately reviewed [80]. After an outcry from hospital administrators and quality improvement officers [81], OHRP eventually reversed its decision [82], but the chilling effect of its handling of this case is likely to affect review of quality improvement activities for some time.
 
HIPAA regulations led to the perception that review of patient records by someone other than a direct care provider requires IRB review, particularly if there is intent to publish. However, a recent multidisciplinary panel of bioethicists, quality improvement officers, and regulatory officials reached very different conclusions [50]. The panel noted that both patients and providers have an ethical obligation to participate in quality improvement efforts, a fundamental distinction from research. The panel proposed that most quality improvement efforts should not be reviewed by an IRB, even when there is an intention to publish the outcomes. The panel's deliberations provide the fresh perspective needed to move the field beyond the post-HIPAA overexpansion of IRB review.
 
Fund OHRP at a Level Consistent with its Broad Mission
 
Several of the recommendations above call for actions from OHRP, but this agency remains critically underfunded. Despite being responsible for a broad range of policy issues and oversight of thousands of IRBs, OHRP is a small agency, with a budget that has not kept pace with inflation (2008 budget of $4.7 million) [83]. Congress should increase funding for OHRP, coupled with a mandate to provide policy guidance on the subjects outlined above.
 
Summary: Restoring the Balance in Research Oversight
 
As an organization devoted to the prevention and care of infectious diseases, the IDSA reiterates its commitment to responsible research oversight. Both for the protection of research participants and to foster public trust in the process, research oversight is critical. However, time and resources are finite, and there are urgent needs for research on many illnesses. Careful studies provide compelling evidence that the current system includes practices that delay research and increase its costs while failing to contribute to the safety or privacy of research participants.
 
It will be critical that the much-needed public discourse on appropriate regulatory oversight of research and quality improvement be framed in a broader context than has been typical in recent years: a perspective that acknowledges the rare and reprehensible instances of investigator fraud or inattention to participant safety, but that also draws on data showing how inefficiencies and redundancies in the current system unduly delay vital research. Patients and disease advocacy groups, as well as researchers and regulators, need to be a part of this discussion. The need for research and the need for oversight are not competing agendas; they are 2 pillars that support the research enterprise. It is time to restore the balance.
 
Potential conflicts of interest. All authors: no conflicts.
 
 
 
 