TDM  
 
 
  "Drug concentration control no apparent help in HIV salvage"
 
NEW YORK (Reuters Health) - There appears to be no significant improvement in outcome of HIV salvage therapy when drug dosages are based on a concentration-controlled intervention (CCI) protocol -- whether the drug selection is guided by rules-based HIV-1 genotype drug-resistance interpretation (GI) or by virtual phenotype drug-resistance interpretation (VPI) -- Italian researchers report in the June 15th issue of Clinical Infectious Diseases.
 
Dr. Carlo Torti of the Institute for Infectious and Tropical Diseases, Brescia, and colleagues note that it is not well defined which of the two main ways of interpreting genotypic drug resistance tests -- GI or VPI -- might be best for HIV salvage therapy. Also, monitoring and adjusting dosage based on plasma concentrations -- CCI -- has not been shown to improve efficacy in prospective studies.
 
To investigate further, the researchers conducted a prospective trial of 230 patients on failing regimens. They were randomized to change treatment after either GI or VPI. They were then further randomized to a control group or to receive CCI.
 
Virological benefit did not differ significantly across groups, although there was a non-significant advantage in those given CCI compared to controls.
 
At week 4, 64.3% of patients in the CCI group had achieved an HIV RNA load below 400 copies per mL, versus 56.8% in the control group. Corresponding values at week 12 were 74% and 63.6%.
 
The researchers note that although medication adherence was higher in the CCI group, dose adaptation was possible only in a fraction of patients overall because of poor treatment adherence or patient refusal to increase doses.
 
Among independent predictors of virologic response at the end of the 24-week study were higher plasma trough concentrations of protease inhibitors (PIs) and non-nucleoside reverse transcriptase inhibitors. This was also true of regimens containing PIs boosted with ritonavir.
 
The researchers conclude that the results do not support routine use of CCI, but they suggest that practical obstacles, such as poor patient compliance, may have influenced these findings.
 
Clin Infect Dis 2005;40:1828-1836.
 
 
INTRODUCTION
 
Nowadays, management of antiretroviral treatment failure should include HIV drug-resistance testing [1]. Genotypic drug-resistance tests are faster, less expensive, and more accessible than are phenotypic drug-resistance tests, although interpretation of their results may be problematic [2]. There are 2 major approaches to the interpretation of results of genotypic drug-resistance tests. The first approach, rules-based genotype interpretation (GI), is based on rules defined by experts, and the second approach, virtual phenotype interpretation (VPI), deduces the matching viral phenotypes from a large proprietary database (VirtualPhenotype; Virco) [3]. These approaches to interpretation have not yet been compared in a prospective fashion.
 
So far, no prospective studies have been able to prove any significant advantage of the use of concentration-controlled interventions (CCIs) to improve the efficacy of antiretroviral salvage regimens. However, the main drawback of the studies conducted to date [4, 5] is that CCI was applied at a single time point and late after initiation of salvage treatment. It remains to be proven whether earlier and repeated interventions are more useful. The objective of the present study was to compare the clinical efficacy of salvage regimens selected, on a randomized basis, using 2 different HIV-1 genotypic drug-resistance interpretation systems, with or without CCI aimed at optimizing plasma drug levels.
 
A Randomized Controlled Trial to Evaluate Antiretroviral Salvage Therapy Guided by Rules-Based or Phenotype-Driven HIV-1 Genotypic Drug-Resistance Interpretation With or Without Concentration-Controlled Intervention: The Resistance and Dosage Adapted Regimens (RADAR) Study
 
Carlo Torti,1 Eugenia Quiros-Roldan,1 Mario Regazzi,3 Andrea De Luca,5 Francesco Mazzotta,7 Andrea Antinori,6 Nicoletta Ladisa,8 Valeria Micheli,9 Anna Orani,10 Andrea Patroni,1,4 Paola Villani,3 Sergio Lo Caputo,7 Francesca Moretti,1 Simona Di Giambenedetto,5 Filippo Castelnuovo,2 Paolo Maggi,8 Carmine Tinelli,4 Giampiero Carosi,1 and the RADAR-MASTER Study Groupa
 
Background. It is not well defined whether concentration-controlled intervention (CCI) and rules-based human immunodeficiency virus (HIV) type 1 genotype drug-resistance interpretation (GI) or virtual phenotype drug-resistance interpretation (VPI) may improve the outcome of HIV salvage therapy.
 
Methods. In a prospective, randomized, controlled trial, patients were randomized (on a factorial basis) to change treatment after either GI or VPI, and they then were further randomized to the control arm (no CCI) or the CCI arm. Protease inhibitor (PI) and nonnucleoside reverse-transcriptase inhibitor (NNRTI) trough concentration (Ctrough) values were determined at weeks 1, 4, 12, and 24 of the study.
 
Results. Among 230 patients, virological benefit (defined by an HIV RNA load of <400 copies/mL at week 24) was not statistically different, either between patients in the GI and VPI arms or between patients in the CCI and control arms.
 
A virological benefit was found for patients in the CCI arm, compared with patients in the control arm, but this benefit was not statistically significant (56.8% vs. 64.3% at week 4 and 63.6% vs. 74% at week 12, control vs. CCI).
 
Dosage adaptation was possible for only a fraction of patients, because of low rates of treatment adherence or patient refusal to increase dosages.
 
In the logistic regression analysis, independent predictors of virological response at week 24 were a PI Ctrough value and/or an NNRTI Ctrough value in the higher quartiles (or above cutoff levels) and a low number of PIs previously received. Moreover, receipt of a regimen that contained PIs boosted with ritonavir was an independent predictor of virological response.
 
Conclusions. The present study did not support the routine use of CCI for patients undergoing salvage treatment, probably as a result of existing difficulties associated with its clinical application. However, a higher Ctrough value appeared to be correlated with treatment response. No major differences were found between VPI or GI when they are used together with expert advice for the selection of salvage treatment combinations.
 
MORE RESULTS: In univariate analysis, a higher HIV RNA load at baseline (OR, 0.5; 95% CI, 0.3-0.8; P = .005), a greater number of PIs previously received (OR, 0.7; 95% CI, 0.4-0.9; P = .04), patient adherence rated less than good (OR, 0.2; 95% CI, 0.1-0.5; P < .0001), and plasma Ctrough values in the lowest quartile or below the predefined cutoff values (OR, 0.4; 95% CI, 0.2-0.9; P = .03) were associated with lower odds of virological response at week 24, whereas use of boosted-PI regimens had a protective effect against failure (OR, 4.1; 95% CI, 1.7-9.8; P = .002). In multivariable analysis, the number of PIs previously received (OR, 0.5; 95% CI, 0.3-0.9; P = .02) and plasma Ctrough values in the lowest quartile or below the predefined cutoff values (OR, 0.3; 95% CI, 0.1-0.8; P = .02) remained independent predictors, and use of boosted-PI regimens remained protective (OR, 4.4; 95% CI, 1.7-11.8; P = .003).
 
AUTHOR DISCUSSION
In the present study, no significant difference in virological outcome was found between patients randomized to change treatment guided by rules-based GI and patients randomized to the VPI arm. In a retrospective analysis, we previously observed better prediction of virological response by use of the GSS from rules-based GIs, compared with scores derived from real-phenotype interpretations or VPIs [28]. It has been demonstrated that, for some drugs, VPI is less stringent than GI [28], and this may affect drug selection differently. However, in the present trial, differences in drug prescription were small, except with regard to didanosine, which was prescribed more frequently for patients in the VPI arm, and lamivudine, which was prescribed more frequently for patients in the GI arm. Because the VirtualPhenotype report also includes a list of mutations, experts may have considered additional interpretations of the genotypic drug-resistance patterns, and this may have minimized the effect of differences in drug prescription. Importantly, the use of boosted PI regimens was found to be an independent predictor of virological success, and this may have obscured the effect of the prescribed drugs to which HIV-1 was found to be susceptible. In fact, GSS was not associated with probability of virological suppression in this study, in apparent disagreement with evidence reported elsewhere [28-30].
 
Statistically significant benefit was not apparent in patients randomized to the CCI arm, although a significant concentration-response relationship was observed, with a Ctrough value above the lowest quartile being independently associated with better virological response. Moreover, achievement of a higher Ctrough value could have been a consequence of CCI, probably because of either increased dosage or better patient adherence. Indeed, adherence to medication was higher among patients randomized to the CCI arm. Likewise, the fact that patients knew that they were randomized to the CCI arm may have served as an adherence-promoting intervention, because Ctrough values were communicated to patients and because patients were reminded of the importance of adherence if their Ctrough values were low.
 
The present study raises several issues for discussion. First, the sample size was calculated to detect a 20% difference between study arms in attaining an HIV RNA load of <400 copies/mL; therefore, smaller differences may not have been detected at a conventional level of statistical significance. Second, dosage adjustment could not be performed for all patients who needed it. Most patients who had suboptimal drug concentrations either admitted low adherence and did not have a dosage increase recommended, or they did not comply with the advice of a physician to increase the dosage. Therefore, careful monitoring and efforts to increase adherence are crucial. In fact, low adherence may complicate interpretation of drug-concentration values and, therefore, CCI applicability. Third, in addition to the wide interpatient variability that makes CCI attractive, some degree of intrapatient variation in the Ctrough value may have made single measurements unreliable and target concentrations more difficult to achieve. Overall, these observations emphasize the difficulties associated with the use of plasma drug-level monitoring in clinical practice. Fourth, more information could be collected by integrating pharmacokinetic and HIV drug-resistance results [31], a concept collectively defined as the "inhibitory quotient," which has been shown to be the best predictor of therapeutic success in different cohorts. Additional studies based on our data are underway to further investigate this important topic. Finally, in considering the similar results obtained by use of the VPI or GI systems, it is important to emphasize that the interpretation for treating physicians was done by experts. In many clinical settings, interpretation by experts is not a usual component of care. Thus, the generalizability of the findings to clinical practice might not be as straightforward as it would have been without the routine involvement of experts.
 
In conclusion, a statistically significant virological benefit from CCI was not observed in this study. However, the Ctrough value correlated with virological response independently of adherence to treatment and the type of regimen followed. Moreover, practical obstacles to transferability into routine clinical care (such as patients' suboptimal adherence and refusal to increase dosage) were recognized in the present study and should be taken into account to better design and conduct further interventions. Also, in this cohort of patients who were not very heavily pretreated, genotype interpretation performed using VPI or rules-based interpretation systems did not translate into major differences in virological outcome in the principal outcome analysis, probably as a consequence of expert advice. Last but not least, use of boosted PI regimens appeared to increase the virological efficacy of salvage treatment to a significant extent.
 
RESULTS
Patient characteristics. From May 2002 through September 2003, a total of 265 patients (138 patients in the GI arm and 127 patients in the VPI arm) were randomized to treatment. Of these 265 patients, 35 did not start treatment, either because the amplification of genetic material or the VirtualPhenotype report did not meet quality-assurance criteria (n = 17) or because patients did not return for the baseline visit (n = 18). The higher dropout rate among patients in the VPI arm, compared with that among patients in the GI arm, possibly was a result of the longer time necessary to obtain the genetic sequence, to send it for analysis, and to obtain the interpretation. As a result, 230 patients started treatment (131 patients in the GI arm and 99 patients in the VPI arm). Characteristics of the 265 randomized patients did not differ significantly from those of the 230 patients who started treatment. Between the GI and VPI arms, there was a significant difference in the mean number of PIs previously received and the mean number of treatment regimens previously received. The prevalence of susceptible HIV-1 isolates was greater in the VPI arm than in the GI arm for zidovudine (66.3% vs. 31%), didanosine (86.6% vs. 16.8%), zalcitabine (87.8% vs. 16.8%), stavudine (86.7% vs. 41.2%), and abacavir (77.6% vs. 31.9%). Consistent with this interpretation, didanosine was more frequently prescribed for patients in the VPI arm than for patients in the GI arm (52.5% vs. 22.9%; P < .0001), whereas patients in the GI arm received more triple-class regimens than did patients in the VPI arm (22.6% vs. 8.2%; P = .004).
 
Patients in the control and CCI arms were matched on the basis of demographic and clinical characteristics, except with regard to the mean number of treatment regimens previously received. As shown in table 1, the type of treatment regimen received and the mean GSS did not differ between the 2 CCI and control arms.
 
Study outcomes. There were no statistically significant differences in the percentages of patients with an HIV RNA load of <400 copies/mL at week 24 of the study, either between the GI arm and the VPI arm or between the CCI arm and the control arm. Only at week 4 was a significant difference in the percentages of patients with an HIV RNA load of <400 copies/mL noted between the GI arm and the VPI arm (51.9% vs. 72.1%; P = .004). A numerically higher percentage of patients with an HIV RNA load of <400 copies/mL was found in the CCI arm than in the control arm (56.8% vs. 64.3% at week 4 [P = .3] and 63.6% vs. 74% at week 12 [P = .1], control vs. CCI). Patients randomized to the VPI/CCI arm were more likely to have an undetectable HIV RNA load than were patients randomized to the GI/control arm, but the difference was statistically significant only at week 4. There were no statistically significant differences between arms with regard to the mean decreases in the HIV RNA load.
 
Wide interpatient variability in Ctrough values was found. Overall, intrapatient variability, assessed using time-dependent repeated-measures analysis of variance, yielded differences in Ctrough values in the range of -0.33 µg/mL (95% CI, -1.03 to 0.36 µg/mL for week 4 vs. week 1; P = .35) to -0.73 µg/mL (95% CI, -1.44 to -0.36 µg/mL for week 24 vs. week 1; P = .043). The number (percentage) of patients in the CCI arm who had Ctrough values less than the predefined cutoff values for any PI or NNRTI at each of the 4 time points in the study was as follows: 27 (25.5%) of 106 patients at week 1; 29 (28.4%) of 102 patients at week 4; 26 (25.5%) of 102 patients at week 12; and 29 (29.3%) of 99 patients at week 24. The corresponding numbers (percentages) were higher in the control arm: 33 (37.9%) of 87 patients at week 1; 32 (36.7%) of 87 patients at week 4; 37 (42.5%) of 87 patients at week 12; and 25 (32.5%) of 77 patients at week 24.
 
Among patients in the CCI arm whose Ctrough values were less than the cutoff values, the drug dosage was increased for only 8 (29.6%) of 27 patients at week 1, for 7 (24.1%) of 29 patients at week 4, for 0 of 26 patients at week 12, and for 2 (6.9%) of 29 patients at week 24. Among patients whose drug dosage was not increased, the main reasons were as follows: (1) suboptimal patient adherence to treatment, such that patients and treating physicians agreed to improve adherence behaviors before increasing drug dosages (rate of suboptimal adherence: 11 [57.9%] of 19 patients at week 1; 8 [36.4%] of 22 patients at week 4; 15 [57.7%] of 26 patients at week 12; and 14 [51.8%] of 27 patients at week 24); or (2) refusal by the patient to increase the dosage (rate of refusal: 8 [42.1%] of 19 patients at week 1; 13 [59.1%] of 22 patients at week 4; 11 [42.3%] of 26 patients at week 12; and 13 [48.1%] of 27 patients at week 24). Of the 8 patients who had the dosage increased at week 1, 2 (25%) still had Ctrough values less than the cutoff values at subsequent follow-up visits; both of these patients had a virological response at week 24. Of the 7 patients who had the dosage increased at week 4, only 1 did not reach an adequate Ctrough value, although he did have a virological response.
 
Predictors of treatment outcome. The percentage of patients who had a virological response, according to categorization of Ctrough values by quartiles, demonstrated a positive association between higher Ctrough values and virological response, as shown in figure 3. Intriguingly, having a PI and/or an NNRTI Ctrough value in the lowest quartile appeared to be inversely correlated with the degree of adherence to treatment. Of patients who had a mean Ctrough value in the <25% quartile, 63.9% had poor adherence at ⩾2 follow-up visits, whereas the rates of poor adherence for patients with Ctrough values categorized in the 25%-75% quartiles or the >75% quartile were 30.6% (P = .0007) and 20.4% (P < .0001), respectively. Therefore, to better investigate the independent influence of these factors on virological outcome, logistic regression analysis was performed.
 
MATERIALS AND METHODS
Study design. The Resistance and Dosage Adapted Regimens (RADAR) study was a randomized, open-label, multicenter trial. Patients who were following a failing treatment regimen that included ⩾3 antiretrovirals were included in the study. "Treatment failure" was defined either by (1) a plasma HIV RNA load of >1000 copies/mL after receipt of stable combination therapy for >6 months or (2) an HIV RNA load of >1000 copies/mL after receipt of stable combination therapy for >3 months, with a reduction in the HIV RNA load of <1 log10 copies/mL. The number of treatment regimens previously received was calculated by totaling the number of treatment-change episodes (i.e., any change or addition of a protease inhibitor [PI], a nonnucleoside reverse-transcriptase inhibitor [NNRTI], or abacavir). Substitutions of the remaining nucleoside/nucleotide reverse-transcriptase inhibitors (NRTIs) were counted as 1 treatment-change episode overall.
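As an illustration, the treatment-change-episode counting rule above can be sketched in code. The drug lists are illustrative and incomplete, and the tie-breaking (major-class changes counted per transition; all NRTI-only substitutions counted once overall) is our reading of the rule, not part of the published protocol.

```python
# Sketch of the treatment-change-episode counting rule (illustrative drug lists).
PIS = {"saquinavir", "ritonavir", "indinavir", "nelfinavir", "lopinavir", "amprenavir"}
NNRTIS = {"nevirapine", "efavirenz", "delavirdine"}
MAJOR = PIS | NNRTIS | {"abacavir"}  # changes/additions of these count as episodes

def count_treatment_changes(history):
    """history: chronological list of regimens, each a set of drug names."""
    episodes = 0
    nrti_substitution_seen = False
    for prev, cur in zip(history, history[1:]):
        if (cur & MAJOR) != (prev & MAJOR):
            episodes += 1                  # change or addition of a PI, NNRTI, or abacavir
        elif cur != prev:
            nrti_substitution_seen = True  # substitution among the remaining NRTIs
    if nrti_substitution_seen:
        episodes += 1                      # all NRTI substitutions count as 1 episode overall
    return episodes

history = [
    {"zidovudine", "lamivudine", "indinavir"},
    {"stavudine", "lamivudine", "indinavir"},   # NRTI-only substitution
    {"stavudine", "lamivudine", "nelfinavir"},  # PI change: one episode
]
print(count_treatment_changes(history))  # 2
```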
 
By use of simple randomization lists generated at a central site (1 list was available for each study center), patients were randomized at the coordinating center to receive treatment guided by results of either the Trugene HIV-1 genotyping test (GuideLines, version 5.0; Bayer) (i.e., the GI arm of the study) or the VirtualPhenotype report (version 1.6.1) (i.e., the VPI arm of the study). The same patients were further randomized, on the basis of a factorial design, to receive CCI (i.e., the CCI arm) or not (i.e., the control arm).
 
At each study center, an expert clinician with access to all data recommended the treatment regimen for each patient. One expert clinical pharmacologist was consulted to suggest dosage modifications. When suboptimal plasma trough concentrations (Ctrough) were observed at the end of a dosage interval, a dose increase was recommended as soon as possible. The following equation, which was based on the assumption that changing the dose would change the Ctrough value proportionally, was used to calculate the new dose: new dose = (cutoff concentration/measured Ctrough) × previous dose. As a safeguard against possible toxicities, and to provide the study centers with a standard procedure, the new dose could not be >40% greater than the previous dose, on the basis of the approximate coefficient of variation of the oral clearance of several antiretrovirals.
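As a worked example, the dose-adjustment rule above (proportional scaling toward the cutoff, capped at a 40% increase) can be written out as follows; the function name and the numbers in the example are illustrative, not taken from study data.

```python
def adjusted_dose(measured_ctrough, cutoff_ctrough, previous_dose):
    """New dose per the protocol's rule: proportional scaling, capped at +40%.

    Assumes, as the protocol did, that Ctrough changes proportionally with dose.
    """
    if measured_ctrough >= cutoff_ctrough:
        return previous_dose                   # concentration adequate; no change
    proposed = (cutoff_ctrough / measured_ctrough) * previous_dose
    return min(proposed, 1.4 * previous_dose)  # safety cap: at most 40% above previous

# Illustrative numbers: target 4000 ng/mL (the lopinavir cutoff), measured
# Ctrough 3200 ng/mL, previous dose 400 mg -> scaling factor 1.25, below the cap.
print(adjusted_dose(3200, 4000, 400))  # 500.0
```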
 
Study monitoring. Before baseline, HIV RNA load determination and genotype drug-resistance testing were performed, and data on demographic characteristics, risk factors for acquisition of HIV, clinical stage of HIV disease (according to the 1993 classification by the Centers for Disease Control and Prevention [CDC; Atlanta] [6]), and previous and current antiretroviral therapy were recorded. At weeks 1, 4, 12, and 24 of the study, assessments of CCI, HIV RNA, hematological and biochemical parameters, patient adherence to treatment, and clinical characteristics were performed. At the end of the study, the plasma Ctrough values in the control arm were measured retrospectively. The plasma HIV RNA load was measured using the standard method (i.e., RT-PCR [Amplicor; Roche], nucleic acid sequence-based amplification [Organon Teknika], or branched DNA [Quantiplex 2.0 branched chain DNA-enhanced label amplification assay; Chiron]) used at each center participating in the study. The threshold HIV RNA load was 400 copies/mL.
 
HIV-1 drug-resistance genotyping was performed using a commercially available system (GeneObjects; Visible Genetics). For patients in the GI arm, the Trugene HIV-1 genotyping test was used for genotypic interpretation. This rules-based algorithm was based on a review, by international experts, of the in vitro phenotypic and in vivo response data available as of September 2001. The test results were reported as follows: "no evidence of resistance," "possible resistance," or "resistance." Data on the HIV-1 sequences from patients in the VPI arm were sent by e-mail to Virco to obtain the VirtualPhenotype report. According to the manufacturer of VirtualPhenotype, an isolate was interpreted as resistant or susceptible to a drug according to whether the deduced fold change in the IC50, compared with that noted for reference isolates, was greater than or less than the biological cutoff level [3]. Both experts and treating physicians were blinded to the interpretation produced by the alternative approach.
 
Plasma PI and NNRTI Ctrough values were determined by a central laboratory, by use of a validated high-performance liquid chromatography assay [7]. To define the target Ctrough, the Ctrough values obtained in various clinical studies [8-27] were reviewed. For each of the following agents, the chosen threshold was a Ctrough value greater than the lower limit of the 95% CI, under the assumption of a 30% coefficient of variation: for saquinavir, 400 ng/mL; for ritonavir, 1850 ng/mL; for indinavir, 400 ng/mL; for nelfinavir, 1600 ng/mL; for lopinavir, 4000 ng/mL; for amprenavir, 1900 ng/mL; for nevirapine, 3200 ng/mL; and for efavirenz, 2200 ng/mL. Therapeutic dosing of nevirapine has been shown to result in an approximately 2-fold increase in apparent clearance after 2 weeks of treatment; therefore, the Ctrough target for nevirapine (3200 ng/mL) was applied from week 2 of treatment.
 
Adherence to treatment was assessed using a visual analogue scale (VAS), and the percentage of antiretroviral drug doses missed by patients in the week preceding the follow-up visit was recorded. Adherence was classified as "poor" (⩽50% of doses were taken), "intermediate" (51%-90% of doses were taken), or "good" (>90% of doses were taken). Patients were also asked to answer "yes" or "no" to 2 questions regarding adherence behavior. First, to determine qualitative adherence, patients were asked whether they adhered to the times scheduled for drug intake and to the ingestion of drugs on a full or empty stomach (as prescribed by clinicians). Second, to determine quantitative adherence, patients were asked whether they had missed taking any prescribed doses or pills. Finally, a composite measurement of adherence was calculated as follows: adherence was considered to be "poor" if it was poor according to the VAS and if the patient's responses to the questions about adherence behavior indicated nonadherence; "good," if VAS results were good and if the patient's responses to all questions about adherence behavior indicated adherence; and "intermediate," for all other patients.
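The composite adherence measure lends itself to a small decision rule. The sketch below is our rendering of the scheme, with the two behavioral questions collapsed into booleans that are True when the answers indicated adherent behavior (a simplification of ours, not the study's wording).

```python
def composite_adherence(fraction_taken, kept_schedule, no_missed_doses):
    """Composite adherence classification (sketch of the study's scheme).

    fraction_taken: proportion of prescribed doses taken in the past week (VAS).
    kept_schedule / no_missed_doses: True when the patient's answers to the two
    behavioral questions indicated adherent behavior (our simplification).
    """
    # VAS component: poor <= 50%, intermediate 51%-90%, good > 90% of doses taken
    if fraction_taken > 0.90:
        vas = "good"
    elif fraction_taken > 0.50:
        vas = "intermediate"
    else:
        vas = "poor"
    if vas == "good" and kept_schedule and no_missed_doses:
        return "good"          # good VAS plus adherent answers to both questions
    if vas == "poor" and not (kept_schedule or no_missed_doses):
        return "poor"          # poor VAS plus non-adherent answers
    return "intermediate"      # all other combinations

print(composite_adherence(0.95, True, True))    # good
print(composite_adherence(0.40, False, False))  # poor
print(composite_adherence(0.80, True, True))    # intermediate
```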
 
Statistical methods. The primary end point of the study was the proportion of patients with virological success (i.e., an HIV RNA load of <400 copies/mL) at week 24. Secondary end points were (1) the absolute change, from baseline, in the HIV RNA load and the CD4+ T cell count and (2) the proportion of patients with a persistent virological response (defined by an undetectable viral load achieved before week 24 and sustained up to week 24). Sample size was calculated under the assumption of a 20% difference in achieving an HIV RNA load of <400 copies/mL at week 24 in the 2 comparisons, that is, the GI arm versus the VPI arm and the control arm versus the CCI arm (50% vs. 30%; α = 5%; power = 80%). A dropout rate of 15% was projected for each arm. Therefore, a minimum of 230 patients would be required for enrollment in the study.
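As a rough cross-check of the stated design parameters (50% vs. 30% response, two-sided α = 5%, 80% power, 15% dropout), the standard normal-approximation formula for comparing two proportions can be evaluated as below. This is a sketch with hard-coded normal quantiles; it is not claimed to reproduce the exact published figure of 230, which may incorporate additional adjustments (e.g., a continuity correction).

```python
import math

def n_per_group(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Normal-approximation sample size per group for two proportions
    (two-sided alpha = 0.05 and power = 0.80 with the default quantiles)."""
    p_bar = (p1 + p2) / 2
    term = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return (term / (p1 - p2)) ** 2

n = n_per_group(0.50, 0.30)             # roughly 93 patients per group
total = 2 * math.ceil(n)                # both groups, before dropout inflation
with_dropout = math.ceil(total / 0.85)  # allowing for 15% dropout
print(round(n, 1), total, with_dropout)
```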
 
Comparison of patient characteristics at baseline was performed using χ2 tests or unpaired Student's t tests, as appropriate. Normal distribution was tested using the Shapiro-Wilk test. Intention-to-treat analysis was the primary analysis performed; information for patients who were still receiving antiretroviral therapy at each time point of the follow-up was imputed, regardless of treatment changes. Repeated-measures analysis of variance was used to test for statistically significant changes in the HIV RNA load and the CD4+ T cell count over time. The χ2 test was used to compare the percentages of patients who had an undetectable viral load at each time point of the follow-up.
 
Logistic regression analysis was used to model the association of virological outcome with independent variables. On the basis of the variability of the entire patient population, PI and NNRTI Ctrough values were categorized according to quartile(s) (i.e., <25%, 25% to 75%, and >75%). In the GI arm, Trugene was used to calculate the drug-resistance genotype susceptibility score (GSS) of the new regimen (i.e., the number of drugs to which the HIV genotype was interpreted as being susceptible).
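The quartile categorization used for the Ctrough covariate can be sketched as follows; the concentrations and the minimal percentile helper are illustrative, not study data.

```python
def quartile_category(value, q25, q75):
    """Three-level Ctrough category used as a covariate: <25%, 25%-75%, >75%."""
    if value < q25:
        return "<25%"
    if value > q75:
        return ">75%"
    return "25%-75%"

def percentile(sorted_vals, p):
    """Linear-interpolation percentile on pre-sorted data (minimal sketch)."""
    k = (len(sorted_vals) - 1) * p
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

# Hypothetical population of trough concentrations (ng/mL):
ctroughs = sorted([1200, 1500, 1800, 2100, 2500, 3000, 3600, 4200])
q25, q75 = percentile(ctroughs, 0.25), percentile(ctroughs, 0.75)

print(quartile_category(1300, q25, q75))  # <25%
print(quartile_category(2500, q25, q75))  # 25%-75%
print(quartile_category(4000, q25, q75))  # >75%
```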
 
The percentage of patients who had an HIV RNA load of <400 copies/mL was considered to be the dependent variable. The following variables were evaluated as covariates: study arms, patient age (>40 years vs. <40 years), sex of the patient, HIV risk factor (injection drug use vs. other risk factors), clinical stage of HIV disease (according to the 1993 classification by the CDC vs. other classifications), hepatitis C virus serostatus (hepatitis C virus antibody positive vs. hepatitis C virus antibody negative), CD4+ T cell count at baseline (per 100 cells/mm3), viral load at baseline (per each log10 HIV RNA copies/mL), number of treatment regimens previously received, number of PIs previously received, the GSS associated with the new regimen, the number of new drugs (i.e., drugs not yet received by patients) in the prescribed regimen to which the HIV isolate was susceptible, use of a new antiretroviral class, type of regimen followed (3 NRTIs vs. 2 NRTIs + 1 NNRTI vs. 2 NRTIs + 1 NNRTI + 1 PI vs. 2 NRTIs + 1 PI with or without ritonavir prescribed as a booster dose), the plasma NNRTI and PI Ctrough values (categorized either according to the 25%-75% quartiles vs. the >75% and <25% quartiles or as Ctrough values less than the predefined target concentrations vs. Ctrough values greater than or equal to the predefined target concentrations), adherence (poor and intermediate vs. good), change in drug dosage, and clinical center.
 
The variables for which P < .2 according to univariate analysis, variables that were clinically meaningful, and variables with a different distribution between study arms at baseline were entered into a multivariate model. Likelihood ratio tests were used to perform an analysis of hierarchical models. Conventional statistical significance was denoted by P < .05. All tests were 2-sided. Analyses were performed using Statistica software (StatSoft) and Stata statistical software, version 7.0 (Stata).
 
 
 
 