Innovative treatment as a precursor to clinical research.

Citation metadata

Date: Aug. 1, 2021
From: Journal of Clinical Investigation (Vol. 131, Issue 15)
Publisher: American Society for Clinical Investigation
Document Type: Article
Length: 2,456 words
Lexile Measure: 1380L

Article Preview:

Reliance on randomized controlled trials (RCTs) as the gold standard for assessing therapeutics began in 1948, with the widely cited trial of streptomycin for pulmonary tuberculosis (1). Less widely recognized is the fact that the trial was preceded by a retrospective analysis of 92 cases of innovative treatment for miliary and meningeal tuberculosis (2). Since then, the use of innovative treatments prior to formal trials has fallen out of favor and is now actively discouraged.

"Innovative" treatments are treatments that "depart in a significant way from standard or accepted practice" (3). The use of innovative treatments is common in many areas of medicine, including surgery, reproductive medicine, and oncology (4, 5). For example, it has been estimated that the majority of advances in surgery are the result of innovation, not clinical trials (6). At the same time, the use of innovative treatments raises important ethical challenges. By deviating from standard care, innovative treatments can pose significant risks and offer uncertain benefits. Unchecked, innovative practice can lead to the dissemination of ineffective, even harmful, interventions.

In response to these concerns, commentators argue that innovative treatments should be used no more than a few times before they are tested in formal clinical trials, at which point the interventions found safe and effective can be offered to patients (7). To enforce this approach, some institutions place a numerical cap, often as few as three patients, on the use of an innovative treatment before it must be evaluated in a clinical trial (8, 9). This approach, which has become the standard paradigm for the development of new interventions, ensures that they undergo rigorous testing before being offered to patients and disseminated widely.

Yet initiating formal trials can take time, sometimes years, resulting in lost opportunities for patients who need treatment urgently (10), and it frequently involves randomizing some patients to a no-treatment control group. Mandating that new interventions be evaluated in formal trials before clinicians have enough experience using them can also make it difficult to determine which version to test. Which dose? What dosing schedule? Preclinical testing in the laboratory and in animals, along with phase I trials, sometimes provides enough information to answer these questions. In other cases, the standard paradigm can lead to premature testing, wasted resources, and the rejection of interventions that would have been found safe and effective had they been administered in a different way: "Outcomes from trials begun too soon after the introduction of a procedure may reflect a lack of sufficient experience with the new technique and not measure efficacy of the procedure when done after the learning curve flattens" (11). For example, experience revealed that a simple change in the dose of corticosteroids was much more effective at reducing mortality from COVID-19 (12, 13). And it has been argued that the majority of effective surgical procedures would have been rejected had they been subjected to formal clinical trials before surgeons gained enough experience to determine how to perform them (14).

These observations suggest that,...
