Development of National and Multiagency HIV Care Quality Measures
 
 
  Clinical Infectious Diseases 2010 Aug Epub
 
Michael A. Horberg,1 Judith A. Aberg,4 Laura W. Cheever,5 Philip Renner,6 Erin O'Brien Kaleba,7 and Steven M. Asch2,3
 
1HIV Interregional Initiative, Kaiser Permanente, Oakland, and 2VA Greater Los Angeles Healthcare System, David Geffen School of Medicine at UCLA, and 3RAND Corporation, Los Angeles, California; 4Department of Medicine, New York University School of Medicine, New York, New York; 5Health Resources and Services Administration, HIV/AIDS Bureau, Rockville, Maryland; 6National Committee for Quality Assurance, Washington, DC; and 7Alliance of Chicago Community Health Services, Chicago, Illinois
 
ABSTRACT
 
Background. Human immunodeficiency virus (HIV) infection is now a complex, chronic disease requiring high-quality care. Demonstration of quality HIV care requires uniform, aligned HIV care quality measurement.
 
Methods. In September 2007, the National Committee for Quality Assurance (under contract with the Health Resources and Services Administration), the Physician Consortium for Performance Improvement of the American Medical Association, and the HIV Medicine Association of the Infectious Diseases Society of America jointly sponsored and convened an expert panel, the HIV/AIDS Work Group, to draft national HIV/AIDS performance measures for individual patient-level and system-level quality improvement.
 
Results. A total of 17 measures were developed to assess processes and outcomes of HIV/AIDS care for patients established in care, defined as having at least 2 visits in a 12-month period; thus, measures of HIV screening, testing, linkage, and access to care were not included. As a set, the measures assess a wide range of care, including patient retention, screening and prophylaxis for opportunistic infections, immunization, and initiation and monitoring of potent antiretroviral therapy. Since development, the HIV/AIDS measures' specifications have been fully determined and are being beta tested, and a majority have been endorsed by the National Quality Forum and have been adopted and implemented by the sponsoring organizations.
 
Conclusions. HIV care quality measurement should be assessed with greater uniformity. The measures presented offer opportunities for such alignment.
 
Human immunodeficiency virus (HIV)/AIDS has progressed to a complex, chronic disease since the early 1980s [1, 2]. HIV-infected patients now have greatly increased life expectancy through the use of appropriate potent antiretroviral therapy, prophylaxis and/or treatment for opportunistic infections, routine vaccinations, general medical health screenings, promotion of healthy activities (including safer sex), and retention in medical care [3]. However, this care is multifaceted and costly and requires the coordinated efforts of many health care professionals [4-7].
 
Many organizations have sought to measure the effectiveness and quality of their HIV care delivery but not necessarily in a coordinated, aligned fashion. The New York State Department of Health AIDS Institute (NYSAI) developed the HIV Quality of Care Program in 1992 [8]. This program is responsible for the systematic monitoring of medical care quality and support services provided to people infected with HIV in New York's hospitals, long-term care facilities, community health centers, drug treatment programs, community-based organizations, and most recently, the managed care HIV Special Need Plans. The measures were developed, expanded, and maintained by a selected group of NY State HIV providers and NYSAI staff. Kaiser Permanente, the largest private provider of HIV care in the United States, began an HIV care quality measurement and improvement program in 2006, measuring HIV diagnosis, access to and retention in care, care processes, and outcomes, leading to HIV care quality improvement programs at Kaiser Permanente [9]. The Veterans' Administration, the largest public provider of HIV care in the United States, has had HIV care quality improvement and measurement research and implementation since 1999 [10]. However, none of these measure development efforts described were coordinated or aligned.
 
More recently, the Ryan White Program of the US Department of Health and Human Services, Health Resources and Services Administration (HRSA), HIV/AIDS Bureau (HAB), developed a set of performance indicators, called HIVQUAL, based on the NYSAI measures. HIVQUAL sought to assess and improve the quality of care delivered to HIV-infected persons at Title III (now Part C) federally funded programs, later expanded to Part D grantees [11-13]. The Institute of Medicine, in reviewing the HRSA/HAB quality improvement portfolio in their report, Measuring What Matters: Allocations, Planning, and Quality Assessment for the Ryan White CARE Act [14], recommended that the HAB further develop a standardized set of performance measures at the clinic level and at the system level for all Ryan White grantees. The HAB released these for public comment in spring 2007.
 
Given multiple sets of unaligned measures for HIV care, there was growing recognition that a coordinated, multistakeholder effort was critical and could help standardize and focus quality improvement efforts while alleviating conflicting reporting requirements. Moreover, as the Institute of Medicine report [14] had underscored, a standard set of measures could more easily be applied across multiple delivery platforms (public, private, large, and small clinics), allowing for illuminating comparisons and longitudinal tracking. HRSA contracted with the National Committee for Quality Assurance (NCQA) in 2006 to conduct a comprehensive environmental landscape of existing measures and evidence-based guidelines for HIV/AIDS care. The NCQA, in turn, partnered with the American Medical Association (AMA)-convened Physician Consortium for Performance Improvement (PCPI), the HRSA, and the Infectious Diseases Society of America (IDSA) HIV Medicine Association (HIVMA) to establish a single set of aligned HIV quality measures for care processes and intermediate outcomes for external accountability and individual quality improvement. The methodology for arriving at this single set of HIV/AIDS measures is described below.
 
Discussion
 
The Work Group developed and achieved widespread endorsement of a single set of HIV care quality measures that reflect retention in care, appropriate care delivery processes and interventions, and outcome of treatment with accountability for that treatment. Taken together, they represent the most important aspects of HIV care that impact the greatest number of HIV-infected individuals in the United States today.
 
HIV is a complex but manageable disease. As such, measurement of quality of care is an essential component of successful therapy [16]. However, measurement alone is insufficient. Once initial measurement is made, it is important that these measurements be used to create quality improvement programs to rectify gaps in care and to ensure continued success where appropriate high-level performance is measured [16]. Levels of success have not been established for these measures, because this would require assessment of present performance across a variety of treatment platforms and consideration of risk adjustment for many clinical settings. Although 100% success is rarely feasible or likely for any individual measure, providers, clinics, and health care systems are encouraged to critically review their performance on each measure and to set reasonable goals for improvement. Furthermore, measurement across different health care systems (including public and private systems) is an essential component of comparative effectiveness and general quality assessment [50]. Eventually, performance levels may be established by the various sponsoring organizations, hopefully in concert with each other.
 
It is recognized that different sponsoring organizations or others who adopt these measures may need to adapt some of the measures for their specific needs and reporting limitations [47-49]. However, despite adaptations made, the essence of the measures should still remain intact, allowing for eventual benchmarking over time. The Work Group tried to consider most situations in their deliberations but recognized that modification was likely, as is seen with other disease processes. Furthermore, some measures would likely not be feasible across all platforms. Not all organizations may offer all the services and care that are measured.
 
There are some important limitations to these measures. As noted, they do not reflect all aspects of HIV care. Most measures related to pregnancy processes and outcomes are not included. Pediatric measures are similarly absent, although the HRSA/HAB is developing a set of these measures at this time. Furthermore, measures reflecting complications of therapy, screening for other comorbidities (including cardiovascular risk or gynecologic or proctologic complications), and genotyping were not included in this first set of measures. However, the measures implemented reflect those deemed to be of the highest priority for initial consideration. The Work Group charge included neither screening for HIV infection among the general population nor access to care for those who have recently received an HIV diagnosis. The NQF specifically noted these latter deficiencies and urged the Work Group to create such measures. It should be noted that many of the measures not included here have been used in some manner by a variety of care organizations [9].
 
In conclusion, HIV care requires consistent assessment of care delivery and desired outcomes. Quality measurement, joined with quality improvement, is an essential aspect of such assessment. The AMA, NCQA, IDSA, HIVMA, and HRSA recognize this and have jointly created and received approval and/or endorsement for a single, aligned set of quality measures reflecting care processes and outcomes for HIV-infected individuals. Providers and health care systems are urged to implement and report these measures. Quality improvement programs should be derived and enacted on the basis of these measurement results.
 
Methods
 
Measure development process. The sponsoring organizations provided funding via contract and administrative support to convene an HIV/AIDS Expert Panel Work Group (hereafter, the "Work Group"), cochaired by 2 of the authors (M.H. and J.A.) and comprising multistakeholder HIV/AIDS care providers, researchers, and performance measurement methodologists. Its explicit goal was to arrive at a set of evidence-based measures that address known gaps in care and that could be specified for use with any data source. The organizations' specific charge included neither HIV screening for the general population nor access to care, because patients had to be established in care for these measures to apply. Furthermore, the Work Group chose to defer addressing measures related to pregnancy of HIV-infected mothers and HIV-related pediatric care, given the need for additional expertise to develop those specific measures. The scope of work specifically included retention in care; screening and prophylaxis for opportunistic infections; immunizations; and initiation, monitoring, and outcomes of antiretroviral therapy. The Work Group maintained that measures needed to be amenable to quality improvement and that the data needed to calculate them had to be readily available in automated or paper medical records [15].
 
During spring 2007, nominations to serve on the Work Group were extended and accepted (for the Work Group roster, see Appendix A, which does not appear in the print version of the journal). Prior to an in-person meeting, members of the Work Group were supplied relevant evidence-based guidelines for the treatment of HIV/AIDS and the NCQA's environmental scan of existing performance measures, including measures previously referenced above. Both HRSA/HAB and IDSA/HIVMA called for public comments on the HRSA/HAB-proposed HIV Clinical Performance Measures in May 2007, and the >800 comments were reviewed by the Work Group. On 16 August 2007, the Work Group had its initial face-to-face meeting in Washington, DC. Discussions included reviewing known gaps in care, existing measures and evidence-based guidelines, data-collection challenges, and key issues from the comments received. By the end of the in-person meeting, the Work Group achieved consensus on a draft set of initial measures. Following the meeting, the Work Group had several conference calls to further refine the measures. The final set of draft measures was then posted for a 30-day public comment period from 15 November 2007 until 14 December 2007.
 
Public comment period. The public comments received helped the Work Group refine the measures. There were 455 separate comments from 75 organizations, addressing all measures. Of the 75 commentators, 28 (37.3%) gave their affiliations as hospitals and physician practices; 13 (17.3%) were from managed care organizations; 12 (16.0%) from local, state, and federal government; and 11 (14.7%) from academic institutions. Of the 455 comments, 174 (38.2%) supported the measures as written, and 221 (48.6%) supported the measures with modifications. Only 60 (13.2%) of the comments did not support the measures. Most of the comments dealt with the strength of the underlying guidelines, patient selection for the denominator of the measures, frequency of measurements, and concerns with confidentiality of HIV patient data.
 
On 7 January 2008, the panel began conference calls to thoroughly review the public comments. A final draft of measures was developed for approval and endorsement.
 
Intended audience and patient population. These measures are designed for use by providers to calculate performance at the provider level and, where appropriate, at the system level (including public and private care clinics). Where existing hospital-level or plan-level measures address the same aspect of care, implementers are encouraged to harmonize the measures to the extent feasible. These measures are appropriate for any health care professional (including but not limited to a primary care clinician, obstetrician/gynecologist, pediatrician, or infectious diseases specialist) who provides routine primary care for HIV-infected patients. The use of these measures by nonphysician health care professionals is encouraged.
 
Determinants of measure selection. The Work Group considered many priorities in determining the selection of measures. On the basis of the direction of the Institute of Medicine [14, 16], the priorities included importance (including prevalence of the condition and potential for improvement), scientific soundness, consensus of care standards, and feasibility. The Work Group considered whether there was already a nationally endorsed measure in existence for a given aspect of care that would already include HIV-infected patients, such as smoking cessation counseling. At the same time, the Work Group tried to achieve parsimony in the total number of measures developed by identifying aspects of care that are clinically tied with other processes of interest. For example, while both CD4 T cell count and HIV RNA level (viral load) are important, the 2 laboratory measurements usually are ordered concurrently, so only CD4 cell count was included as the process measure.
 
Feasibility was a major determinant in which measures were included in the final set. Many measures that were deemed valuable for the management of an HIV-infected patient were too difficult to specify and measure in all care systems and were excluded from further consideration. For example, although antiretroviral therapy adherence is crucial to successful treatment, most health care entities do not have access to either pharmacy records or standardized adherence measurement, which decreases the feasibility of collecting data for such a measure. In addition, although measuring HIV status in infants at 18 months after delivery is extremely important, it is often not feasible to match mother and child across payers (eg, the mother may be a Veterans Affairs patient while the baby is covered by the father's private insurance), which precludes effective care quality measurement.
 
For each measure, the Work Group determined the level of accountability. The Work Group wanted to develop measures that would allow an individual provider to understand her or his performance, while recognizing that care is often delivered by a team or larger entity. Therefore, each measure was labeled as appropriate at the provider level or system level, with an understanding that many measures can and should be applied at both levels for complete assessment. For example, hepatitis B vaccination ordering is a provider's responsibility, but usually the delivery of all doses is the system's responsibility. We note that, in some situations, the provider and the system are synonymous.
 
Specification for use with multiple data sources. For widest applicability, the measures need to be feasible across a variety of data platforms. The Work Group sought to specify measures for implementation using multiple data sources, including paper medical records and administrative (claims) data, with particular emphasis on Electronic Health Record Systems (EHRS). Ideally, all measures should be derivable from electronic databases and health care records, in keeping with national goals (ie, meaningful use criteria). EHRS allow for measurement of clinical outcomes that is not possible with claims data alone, such as whether viral load was suppressed below limits of quantification and not just whether viral load testing was ordered. Furthermore, EHRS facilitate measuring care for an entire population, rather than just a sample of patients. The specifications for these measures include the following components: (1) definition of the numerator, denominator, and exclusions; (2) description of the data elements, including coding schemes, necessary to identify the numerator and denominator (eg, International Classification of Diseases, Ninth Revision codes or Current Procedural Terminology [CPT] codes); and (3) an algorithm for calculating the measure. To facilitate reporting in national programs, such as the Physician Quality Reporting Initiative, that rely on prospective reporting of performance, the AMA assigned CPT category II codes to most of these measures.
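The three specification components described above (numerator, denominator, exclusions, plus a calculation algorithm) follow a common pattern that can be sketched generically. The sketch below is illustrative only and is not the official algorithm for any specific measure; the function name and the patient fields (`in_care`, `hospice`, `cd4_tests`) are hypothetical.

```python
def calculate_measure_rate(patients, in_denominator, excluded, in_numerator):
    """Generic performance-measure calculation following the specification
    structure described in the text: identify the denominator population,
    remove exclusions, then count numerator hits among those remaining."""
    eligible = [p for p in patients if in_denominator(p) and not excluded(p)]
    if not eligible:
        return None  # measure not reportable when the denominator is empty
    met = sum(1 for p in eligible if in_numerator(p))
    return met / len(eligible)


# Hypothetical example: proportion of in-care patients with at least one
# CD4 count documented in the measurement year (hospice patients excluded).
patients = [
    {"in_care": True,  "hospice": False, "cd4_tests": 2},
    {"in_care": True,  "hospice": False, "cd4_tests": 0},
    {"in_care": False, "hospice": False, "cd4_tests": 1},  # not in denominator
]
rate = calculate_measure_rate(
    patients,
    in_denominator=lambda p: p["in_care"],
    excluded=lambda p: p["hospice"],
    in_numerator=lambda p: p["cd4_tests"] >= 1,
)
# rate == 0.5 (1 of the 2 eligible patients met the numerator)
```

In an actual implementation, the predicate functions would be driven by the coding schemes named in the specifications (eg, ICD-9 or CPT codes) rather than by precomputed fields.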
 
Results
 
Measures selected. The Work Group developed or adapted 17 measures for inclusion in this initial set of national measures. Measures addressed processes of care, including retention in care, appropriate health screening, prophylaxis, immunizations, and prescription of antiretroviral therapy, as well as intermediate outcome measures and accountability for not achieving desired outcomes. To ensure that quality was being measured for a provider's routine patients (rather than for a patient seen only once), the Work Group used a modified NYSAI definition of "being in care": at least 2 visits in a year, with at least 60 days between visits [8]. Table 1 lists each measure (including level of evidence) and Table B1 (in Appendix B, which does not appear in the print version of the journal) delineates each measure, including specifications (numerator, denominator, and exclusions).
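The "being in care" criterion lends itself to a simple date check. The following is a hedged sketch, not an official specification: the function name is invented, and the reading that the 60-day minimum applies between the earliest and latest in-period visits is an assumption for illustration.

```python
from datetime import date


def is_in_care(visit_dates, period_start, period_end, min_gap_days=60):
    """Return True if a patient meets the 'being in care' definition used
    by the Work Group: at least 2 medical visits within the measurement
    period, separated by at least min_gap_days.

    Assumption for this sketch: the gap requirement is satisfied when the
    earliest and latest in-period visits are min_gap_days or more apart.
    """
    visits = sorted(d for d in visit_dates if period_start <= d <= period_end)
    if len(visits) < 2:
        return False
    # date subtraction yields a timedelta; .days gives the gap in days
    return (visits[-1] - visits[0]).days >= min_gap_days
```

For example, visits on 5 January and 1 June of the measurement year would qualify, whereas two visits 15 days apart would not.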
 
Each measure was defined as explicitly as possible to decrease misinterpretation by implementers. For example, in the 2 behavioral health screening measures (injection drug use and high-risk sexual behavior), "screening" is defined as documentation either that a discussion regarding high-risk behavior took place or that a standardized tool for assessing high-risk behavior was used. The Work Group recognized that there are no standardized, validated tools for successful screening of these behaviors. Development of such screening tools is underway.
 
"Medical visit" is any visit with a health care professional who provides routine primary care for HIV-infected patients (which may be but is not limited to a primary care clinician, obstetrician/gynecologist, pediatrician, or infectious diseases specialist). "Potent antiretroviral therapy" is described as any antiretroviral therapy that has demonstrated optimal efficacy and results in durable suppression of HIV as shown by prior clinical trials. Rather than being prescriptive about exactly which medication combinations would satisfy this measure, the Work Group refers providers to the most recent Department of Health and Human Services, HIVMA, or International AIDS Society-USA guidelines for specific recommendations. "Below limits of quantification" for viral load refers to the laboratory cutoff level for the reference laboratory used by that clinic or provider, since there are not standard cutoffs used by national laboratories.
 
The Work Group was not uniform in its opinion of how antiretroviral therapy change should occur with persistent viremia, so the measure required only documentation of a plan (measure 17).
 
Approvals and endorsements. In May 2008, the measures were formally approved by each of the sponsoring organizations. Following these approvals, the Work Group submitted the measures to national endorsing and selection organizations, including the National Quality Forum (NQF) and the Ambulatory Care Quality Alliance (Table 2). During the summer of 2008, an NQF panel on infectious diseases met to review the measures for possible endorsement. The majority of measures received a 2-year time-limited endorsement, with ability to receive full endorsement after review of pilot-testing data (ongoing). For measures that already existed in the general population, such as influenza and pneumococcal vaccination, the NQF referred the measures to an established committee working to align immunization measures.
 
Implementation. Because the measures have been endorsed by the NQF and formally approved by the AMA, NCQA, IDSA/HIVMA, and HRSA, they are available for use and reporting. The measures are presently being "beta tested" by a few organizations, including Kaiser Permanente (which adjusted its measures' definitions to align with these developed measures) and a network of community health centers in Chicago, the Alliance of Chicago Community Health Services. Beta testers have reported some lessons about the feasibility, reliability, and validity of the measures. For example, Kaiser Permanente discovered that many of the measures (retention in care, Pneumocystis pneumonia [PCP] prophylaxis, prescription of antiretroviral therapy, and achieving maximal viral control) can be calculated with EHRS and no manual chart review, but screening for risk behaviors is not easily measured by EHRS (M.A.H., personal communication).
 
In addition, HIVMA reconvened the HIV Primary Care Panel to revise its practice guidelines; the revised guidelines were developed in conjunction with these performance measures, with the intent that the performance measures would be used [3]. Notable was the shift in emphasis from adherence to medication toward adherence to care, as well as the need for routine general medical care consistent with the recommendations of the Centers for Disease Control and Prevention Advisory Committee on Immunization Practices and the US Preventive Services Task Force. Furthermore, the AMA PCPI established CPT-II codes to facilitate reporting of these measures (see Table 2 for availability of CPT-II codes) and their inclusion in the 2010 Physician Quality Reporting Initiative [46].
 
Some measures required modification for use by various entities. HRSA/HAB had different provider expectations regarding retention in care, compared with privately funded sources, and continued their previous quality measures (which are markedly similar to these measures) [47-49]. As another example, many large health care organizations do not use CPT-II codes, and the 2 behavioral screening measurements may not be reportable using electronic databases only, a consideration for the NCQA in its implementation for the Healthcare Effectiveness Data and Information Set.
 
Note that these measures could be used by providers for certification in national incentive programs (such as the Physician Quality Reporting Initiative). Furthermore, many of these measures could serve as quality improvement modules as required for specialty recertification.
 