Drug-Approval Clinical Trials in the Age of Precision Medicine: The Promise of Adaptive Trials
In this paper, Peter Huber and Roger Klein explore how adaptive clinical trials could transform how doctors prescribe medicines, allowing them to harness ‘precision medicine’ to develop better, more individualized treatment plans for their patients.
Contributors
This paper was the work of multiple authors. No assumption should be made that any or all of the views expressed are held by any individual author. In addition, the views expressed are those of the authors in their personal capacities and not in their official/professional capacities.
To cite this paper: Peter W. Huber, et al., “Drug-Approval Clinical Trials in the Age of Precision Medicine: The Promise of Adaptive Trials”, released by the Regulatory Transparency Project of the Federalist Society, September 19, 2018 (https://rtp.fedsoc.org/wp-content/uploads/RTP-FDA-Health-Working-Group-Paper-Clinical-Trials.pdf).
The last profoundly transformative revolution in medical science occurred in the late nineteenth century, when Robert Koch and Louis Pasteur began tracking diseases back to infectious germs and, working independently, developed procedures for isolating and cultivating germs, then killing or crippling the harvested microbes to create vaccines effective yet safe enough to administer to healthy people. The tools used to isolate and cultivate microbes then allowed others to test antibiotics initially obtained from microbes themselves, followed quite soon by the development of synthetic antibiotics. Together, vaccines and antibiotics have since saved many millions of lives.
Medicine is now going through a second, even more revolutionary transition: it is descending to the cellular and molecular levels at which many of our diseases are spawned and propelled, and developing targeted drugs to interrupt those processes.
In brief, precision medicine is founded on the recognition of an obvious though often ignored scientific fact that underlies pharmacology: drugs are chemicals that we design to interact with other chemicals—the ones inside our bodies that, for the most part, interact with each other to maintain our health but sometimes turn against us to propel diseases. At their best, drugs interact with our own rogue chemistry to interrupt the chemical pathways that drive diseases.
Molecular medicine, which begins with the power to see molecules and analyze how they interact, leads to a mechanistic understanding of how diseases operate. It puts pharmacology on a truly scientific foundation to an extent unmatched at any time in history. It allows us to think about the rest of human chemistry the way surgeons began thinking about blood and oxygen centuries ago. The people in the blue scrubs don’t consult a label when a stretcher crashes through the doors of the emergency room, splattering blood on the ceiling—they just clamp and sew as fast as they can, because they know that blood-oxygen-brain means life.
Historically, the FDA almost always required and relied on a side-by-side statistical comparison of the health of two crowds to decide whether a drug can safely and effectively cure a disease. Typically, one crowd received the real thing and the other a placebo; sometimes, when a reasonably good treatment was available, the comparison was instead drug versus drug.
Doctors tracked clinical symptoms. The newly healthy and the still sick, the living and the dead, collectively voted the drug up or down. This continues to be the FDA’s “gold standard” approach to therapeutic drug approvals. However, these protocols were formulated long before modern medicine acquired the sophisticated tools used today to diagnose disorders and design drugs to treat them. They are not feasible when science breaks crowds into many small groups. They are also unjustifiably slow.
In 1938, the U.S. Public Health Service tested a pertussis vaccine in Norfolk, Virginia, in what is thought to have been the first randomized, double-blind trial of any pharmaceutical product. Eight years later, British researchers conducted what may have been the first double-blind, placebo-controlled trial of a curative drug, an antibiotic called streptomycin. In the early 1960s the FDA followed their lead, and the protocols it adopted came to be called the gold standard for drug testing.
But these protocols lead to what can, at best, be called crowd-science medicine. Anchored as they are in one-dimensional statistical correlations, they are almost all crowd. Importantly, they fail to account for the individual variation that is essential to predicting whether a drug will work or cause dangerous side effects in each person.
The protocols are, in short, antithetical to the development of “precision” medicine. As Dr. Janet Woodcock, currently the head of the FDA’s Center for Drug Evaluation and Research, noted over a decade ago, molecular biomarkers “are the foundation of evidence based medicine—who should be treated, how and with what. … Outcomes happen to people, not populations.”1
Precision medicine is inherently personal. The treating doctor and the patient are the only ones who have direct access to the information required to prescribe drugs with molecular precision. We will greatly accelerate, improve, and lower the cost of the drug-approval process by relying much more heavily on the experiences of doctors who specialize in the treatment of complex diseases.
Some oncologists are already using new approaches to study specially designed drugs and their applications, implementing a learn-as-you-go, patient-by-patient process as they literally practice personalized precision medicine. Tumor biology is analyzed, and drugs designed to act on specific molecular targets are prescribed only to patients whose tumors contain those targets.
Data gathered from patients are stored in large databases, and sophisticated algorithms then analyze the data and recommend optimal treatments for future patients. This process also allows physicians to prescribe off-label treatment regimens when biologically warranted, regardless of how the drug was tested during the FDA approval process.
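To make the matching step concrete, the following minimal sketch (in Python) illustrates the kind of biomarker-to-target matching these systems automate. The drug names, targets, and patient profile here are hypothetical placeholders, not real prescribing data; a production system would layer outcome feedback, evidence grading, and drug-interaction checks on top of this simple lookup.

```python
# A minimal sketch of the biomarker-matching logic such systems automate.
# All drug names, targets, and patient data here are hypothetical examples.

# Knowledge base: which molecular target(s) each drug was designed to modulate.
DRUG_TARGETS = {
    "drug_A": {"EGFR_L858R"},          # hypothetical EGFR inhibitor
    "drug_B": {"BRAF_V600E"},          # hypothetical BRAF inhibitor
    "drug_C": {"EGFR_L858R", "HER2"},  # hypothetical multi-target agent
}

def recommend(tumor_profile: set[str]) -> list[str]:
    """Rank drugs by how many of the tumor's driver alterations they target."""
    scored = [
        (len(targets & tumor_profile), drug)
        for drug, targets in DRUG_TARGETS.items()
        if targets & tumor_profile  # recommend only when a target is present
    ]
    return [drug for _, drug in sorted(scored, reverse=True)]

# A patient whose tumor carries an EGFR mutation and a HER2 amplification:
print(recommend({"EGFR_L858R", "HER2"}))  # -> ['drug_C', 'drug_A']
```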
As required by the federal drug law, the FDA certifies a drug’s future safety and efficacy only insofar as the drug is used “under the conditions of use prescribed, recommended, or suggested in the labeling thereof.” No drug gets licensed without a label, and the label is where the FDA, in effect, attempts to license future patients. A 1962 law demanded “substantial” evidence derived from “adequate and well-controlled” clinical trials. Thus, no drug is officially deemed safe or effective until the FDA concludes that a sufficient body of evidence has been amassed and collapsed onto a label, allowing doctors to prescribe the drug to the right patients. However, irrespective of the contents of the FDA-sanctioned label, doctors can use their medical judgment to prescribe drugs in accordance with advances in scientific knowledge.
In one off-label trial initiated by oncologists at New York’s Memorial Sloan Kettering Cancer Center, a kidney cancer drug failed to help over 90 percent of the bladder cancer patients to whom it was prescribed. But it did wipe out the cancer in one 73-year-old patient. A genetic analysis of her entire tumor revealed a rare mutation that made her cancer sensitive to the molecular pathway that the drug modulates. Similar mutations were found in about 8 percent of the patients, and the presence of the mutation correlated well with the cancer’s sensitivity to the drug.2
In 2013, the National Cancer Institute (NCI) announced its Exceptional Responders Initiative. Two entities are molecularly profiling tissue samples collected during clinical trials of drugs that failed to win FDA approval or from patients who responded to approved drugs that are not usually effective in that person’s tumor type. The goal is to identify biomarkers that distinguish the minority of patients who did respond well from the majority who did not.3
The analysis of roughly a decade of prior trials has resulted in the identification and acceptance of tissue from over 100 exceptional responders. As of November 2017, the pilot study is no longer accepting new samples while the first 100 cases are analyzed. Accepted tumor tissue samples are undergoing “whole-exome, RNA, and targeted deep sequencing to identify potential molecular features that may have accounted for the response.”4 When the molecules that distinguish the exceptional responders align with what the drug was designed to target, these findings could well lead to the resurrection or repurposing of drugs that could have helped many patients over the last decade.
That long delay could have been avoided if such analyses had been conducted during the original trials under what are called “adaptive” trial protocols. In brief, adaptive trials gather, track, and analyze patient responses to the drug being tested to identify biomarkers that explain why different patients respond differently to the same treatment regimens. Adaptive trials can also search for and find safety biomarkers by analyzing the patients who suffer side effects. By providing the necessary information for the drug’s label, adaptive trials can lead to the approval of drugs that would otherwise be rejected because their side effects would have been judged too serious or common. The trial protocols evolve as the trial progresses and the collective understanding of the drug-patient molecular science improves.
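For illustration, here is a simplified sketch of one ingredient of such a protocol: an interim analysis that asks whether a candidate biomarker separates responders from non-responders and, when the evidence is strong, restricts further enrollment to marker-positive patients. The data and threshold are invented; a real adaptive design would follow a pre-specified statistical plan far more elaborate than this.

```python
# A simplified sketch of one adaptive-trial ingredient: an interim analysis
# that tests whether a candidate biomarker separates responders from
# non-responders. The threshold and data are illustrative, not a real protocol.
from scipy.stats import fisher_exact

def interim_look(patients, alpha=0.01):
    """patients: list of (has_marker: bool, responded: bool) accrued so far.
    Returns True if enrollment should be restricted to marker-positive patients."""
    # Build the 2x2 table: rows = marker +/-, columns = responded yes/no.
    mp_r = sum(1 for m, r in patients if m and r)
    mp_n = sum(1 for m, r in patients if m and not r)
    mn_r = sum(1 for m, r in patients if not m and r)
    mn_n = sum(1 for m, r in patients if not m and not r)
    _, p = fisher_exact([[mp_r, mp_n], [mn_r, mn_n]], alternative="greater")
    return p < alpha  # strong evidence that the marker predicts response

# Illustrative accrual: marker-positive patients respond far more often.
accrued = ([(True, True)] * 9 + [(True, False)] * 3 +
           [(False, True)] * 2 + [(False, False)] * 26)
if interim_look(accrued):
    print("Adapt: enroll only marker-positive patients from here on.")
```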
Recognizing the importance of gathering data from patients, a sizable number of U.S. oncologists are already engaged in “rapid learning health systems,” a term coined in 2007.5 In 2009 the Institute of Medicine applied this concept to oncology care, convening a workshop entitled “A Foundation for Evidence-Driven Practice: A Rapid-Learning System for Cancer Care.” The workshop participants proposed a process for continuously improving drug science using data collected by doctors while treating their patients, with a focus on groups of patients not usually included in drug-approval clinical trials.6,7
By 2008, several major cancer centers had established networks for pooling and analyzing data collected by doctors in their regions. The American Society of Clinical Oncology (ASCO) developed CancerLinQ in 2012, and other non-profit and commercial initiatives have been created. These systems are being used to identify new biomarkers, analyze multidrug therapies, conduct comparative effectiveness studies, recruit patients for clinical trials, and guide treatments. Most major academic centers and many commercial providers now offer precision oncology services.
Data acquired in adaptive trials would likely help bring to market some drugs that perform badly on average but could benefit well-defined cohorts of patients. Such drugs would otherwise be prematurely withdrawn because traditional studies include too few patients from the responding cohorts.
In early 2005, for example, Iressa became the first cancer drug to be withdrawn from the U.S. market after follow-up trials, required under its accelerated approval for a subtype of lung cancer, failed to confirm the drug’s worth to the FDA’s satisfaction. In those additional trials, Iressa failed to establish that it extends average patient survival, and serious side effects surfaced in some patients; the manufacturer therefore halted testing in the United States.
However, doctors knew that Iressa survival times and side effects varied widely among patients, and they had a pretty good idea why. As ASCO Past President Bruce Johnson, a researcher at Boston’s Dana-Farber Cancer Institute and one of the doctors involved in the original Iressa trials, remarked in 2005, “For us as investigators, at this point, there are at least 20 different mutations in the EGF receptors in human lung cancers, and we don’t know if the same drug works as well for every mutation…which is why we want as many EGFR inhibitor drugs available as possible for testing.”8
When the FDA rescinded Iressa’s license, it allowed U.S. patients already benefiting from its use to continue using it. One such patient, who started on Iressa in 2004 when he had been given two to three months to live, was still alive eight years later, walking his dogs several miles daily. Rare cases like his have little influence at the FDA, but they are of great interest to doctors and researchers, and they can end up saving the lives of many other patients once doctors know how to identify which patients will respond well to the drug.
The knowledge was also of great interest to Iressa’s manufacturer. The company subsequently designed studies of lung cancer patients whose tumors had mutations in EGFR. The FDA gave Iressa Orphan Drug Designation in August 2014 for a category of patients with advanced lung cancer and mutations in EGFR. Less than a year later, the FDA approved Iressa as first-line therapy for this class of lung cancer patients.
The trial protocols the FDA adopted in the 1960s to assess efficacy are also, in light of the technologies now available, unnecessarily slow. They have traditionally required that a new drug demonstrate positive clinical effects, which has meant conducting trials that can’t be completed faster than the time it typically takes for the targeted disease to progress to clinical symptoms.
Which brings us to why doctors who specialize in treating complex diseases are becoming increasingly confident that they can and should work out how to practice precision medicine independently, without relying on FDA-approved labels. In brief, it comes down to two things. Researchers have developed the tools needed to work out the details of how molecular processes that go wrong deep inside our bodies spawn and propel diseases, and drug designers have developed a remarkable array of tools to design precisely targeted drugs that can disable or control those pathways. When doctors or patients can monitor problems at the molecular level, they can track how well or badly the solution is working and adjust treatment on the fly.
As one pioneering oncologist discusses in his recent book The Death of Cancer (2015), “As a result of the war on cancer, the cancer cell is no longer a black box; it’s a blueprint. And we can read blueprints. We understand the cancer cell’s stages, how it thinks, what drives it. And we have the tools to attack each of these steps on the way to malignancy. There isn’t a question about cancer we can’t address at this point, with the expectation of a usable answer.”9
Microbiologists, according to a 2011 count, had sequenced 1,554 bacterial species, 2,675 viral species, some 40,000 strains of flu viruses, and more than 300,000 strains of HIV.10 The number of sequenced bacterial species has since exploded to over 44,000, and the National Center for Biotechnology Information (NCBI) Viral Genome database contains nearly 7,500 reference genomes.11,12 Each genome, in the words of Dr. David A. Relman, a professor of medicine, microbiology, and immunology at Stanford, provides a “master blueprint of a microbe”—it is “like being given the operating manual for your car after you have been trying to trouble-shoot a problem with it for some time.”13
Used in connection with modern drug-development tools, analyses like these allowed researchers to develop, in a remarkably short time, a group of antiviral drugs to defeat HIV, along with useful drugs for other viruses. As the National Academy of Sciences observed in 2000, the extraordinarily fast development of the drugs that ended up in the cocktails now used to control HIV had a “revolutionary effect on modern drug design.”14
The power to read the code that choreographs life is transforming medical science in four fundamental ways. By exposing the complex biochemical diversity that often lurks in different patients underneath a single set of clinical symptoms, it disassembles and redefines diseases. By revealing how diseases are spawned and propelled at the molecular level, it reveals drug targets, which serve as templates for drug design. By explaining why a drug performs well in some patients but not others, it provides precise biochemical criteria for selecting future patients who can benefit from use of a drug. And by transforming the definitions of diseases, the design of drugs, and the way they are prescribed, the power to read and understand the molecular code also transforms how Washington should regulate drug science and economics. Knowledge obtained from the genetic code will revolutionize how doctors practice medicine, and how the rest of us think about our health and how to maintain it.
In our generation, biochemists have acquired the tools to gather reams of molecular data about the rogue human cells and microbes that propel the diseases that kill or incapacitate us. They have also developed a remarkable array of new tools for designing precisely targeted drugs. Advances in structure-based drug design, monoclonal antibodies, and, most recently, gene-editing technologies have given biochemists the tools to design drugs that can modulate specific molecular targets or reprogram immune system T cells and stem cells that protect, repair, and maintain tissues throughout our bodies.
The molecular etiologies of diseases as defined by their clinical symptoms can be very complex and can vary considerably from patient to patient. That fact alone can have a large impact on how a drug performs.
The National Institutes of Health’s (NIH’s) 1000 Genomes Project reported in 2015 that its study of twenty-six population groups in Europe, Africa, East Asia, and the Americas had identified almost eighty-five million “single nucleotide polymorphisms” (“SNPs”)—single-letter variations—in their DNA. Another study, completed in 2012, analyzed SNPs in the potential “drug target genes” of fourteen thousand individuals thought to be particularly susceptible to heart attacks, strokes, obesity, and other health problems.
On average, each study subject was found to carry approximately fourteen thousand SNPs, about twelve thousand of which were exceedingly rare. Each had an estimated three hundred genes containing variants found in less than 0.5 percent of the population that would probably disrupt a protein’s structure in ways likely to undermine health and affect how the protein would respond to targeted drugs.
To further complicate the picture, some of our diseases (cancers most notably) involve cells that mutate rapidly and thus quickly learn to evade the targeted drugs prescribed to treat them. Late-stage cancers mutate so fast that they are rarely beaten by a single drug; combination drug cures are required instead. A drug’s performance can also depend on how it is metabolized in the patient’s liver or interacts with molecular bystanders in other organ systems to cause unwanted side effects. Here too, the molecular chemistry involved in these processes can vary significantly across patients.
Infectious diseases can present that problem as well. HIV isn’t so much a virus as a system for spawning new viruses. It replicates fast—so fast that, though the immune system fights back, it can’t keep up. Each new virion formed has, on average, one mutation in its ten thousand units of nucleic acid code. “As death approaches,” Steve Jones writes in Darwin’s Ghost, “a patient may be the home of creatures—descendants of those that infected him—as different as are humans and apes.”
Tamoxifen was developed to block estrogen receptors on cancer cells, because estrogen accelerates development of the estrogen-receptor-positive (ER+) forms of both breast and ovarian cancer. Over a decade after tamoxifen was approved, however, studies revealed that the drug itself wasn’t much good at blocking estrogen—the effective blockers are produced when tamoxifen is metabolized by an enzyme in the liver. But significant numbers of women (with the numbers varying significantly across ethnic lines) have two copies of a gene that produces the enzyme in a form that can’t activate the drug, and women who have one copy activate much less of it.15 Although tamoxifen studies failed to show an effect of these genetic differences on treatment outcomes, differences like these can potentially lead to wide variations in the efficacy of targeted drugs.
Similarly, patients with similar symptoms but genetically or mechanistically different diseases can experience dramatic differences in response to therapies. Recently acquired diagnostic tools have revealed that at the molecular level, many seemingly common disorders as conventionally defined by their clinical symptoms are in fact clusters of biochemically distinct syndromes. The traditional symptom-based definitions of diseases that are used to frame most FDA-approved clinical trials ignore what matters most in modern pharmacology—the same symptoms can be caused by a cluster of different molecular processes, and a precisely targeted drug can control only one of them, and thus benefit only one subset of the patients who present those symptoms.
A drug’s efficacy and safety can also depend on a wide range of other molecular factors that are hard to identify in advance. We still speak of “developing a drug,” but “developing the patients” would be more accurate. Both matter, of course—pharmacology isn’t a science of one hand clapping—but all the complex and variable biochemical details lie on the patients’ side of the applause. As the National Research Council (NRC) put it in a 2011 report, precision medicine requires a “new taxonomy of disease.”16
The only way to work out how many such variations there are, or how they affect a drug’s performance, is to prescribe the drug to a wide variety of patients and analyze how differences in patient chemistry affect its safety and efficacy. In a tacit admission of the limits of its own trial protocols, the FDA itself helped launch a nonprofit consortium of drug companies, government agencies, and academic institutions to compile a global database of “rare adverse events” caused by drugs and link them to the genetic factors in the patients involved.
As the FDA’s own Dr. Janet Woodcock put it in 2004, drug science is at best a “‘progressive reduction of uncertainty’ about effects—or ‘increasing level of confidence’ about outcomes” that comes with the development of “multidimensional” databases and “composite” measurements of outcomes. The FDA’s current drug-approval clinical trial protocols weren’t designed with the objective of studying patient-side chemistry at all. But without patient-side analysis, the FDA-approved label cannot correctly convey the best science on either efficacy or safety. Hence the rapidly growing number of experts who are calling for radical changes in the clinical trial protocols favored by the FDA.
Some experts in the field have suggested that because of the limits that the standard trial protocols place on what the doctors involved in those trials may do, better information could be obtained by integrating the drug-approval process into the process of treating patients. A 2012 report on “Propelling Innovation In Drug Discovery, Development, and Evaluation” written by the President’s Council of Advisors on Science and Technology pointed out: “Most trials…imperfectly represent and capture…the full diversity of patients with a disease or the full diversity of treatment results. Integrating clinical trial research into clinical care through innovative trial designs may provide important information about how specific drugs work in specific patients.”17
The British government recently announced similar plans to integrate clinical treatment into drug-development efforts on a national scale. As described by life-sciences minister George Freeman, “From being the adopters, purchasers, and users of late-stage drugs, our hospitals we see as being a fundamental part of the development process.”
In 2007, the Cancer Biomarkers Collaborative (CBC), a coalition of cancer experts drawn from the American Association for Cancer Research, the FDA, and the National Cancer Institute, started investigating the “growing imperative to modernize the drug development process by incorporating new techniques that can predict the safety and effectiveness of new drugs faster, with more certainty, and at lower cost.” A summary of the conclusions published by the CBC in 2010 noted that “traditional population-based models of clinical trials used for drug approval are designed to guard against bias of selection, which may form the antithesis of personalized medicine, and accordingly, these trials expose large numbers of patients to drugs from which they may not benefit.”18
As two other experts in the field wrote in a 2013 paper: “[R]andomized trials can encounter difficulty in the face of increasing knowledge about cancer mechanisms and the rational development of therapeutic drugs…On the other end of the knowledge spectrum, randomized trials can be insensitive to detecting biologically (and clinically) relevant signals because of noise attributable to the heterogeneity of the population examined….[C]ontinuous improvements in understanding of cancer, combined with increasingly sophisticated interventions, may one day make randomized trials difficult to justify…Increasingly, randomized trials will be forced to share the stage with innovative trials that deeply investigate cancer within individuals….Deciphering the general operating principles of cancer will require methods for converting the results of omic testing into candidate networks, and then monitoring how these networks evolve and respond to treatment over time. Success requires discerning biologically relevant signals from vast backgrounds of noise.…On the one hand, researchers cannot progress unless the hypotheses that they develop can be tested. On the other hand, patients with cancer who are not curable by available therapies are poorly served by clinical trials that ignore tumor-specific characteristics and by physicians who have only their intuitions to guide therapy. Cancer research is also poorly served because of the many existing clinical trials from which we currently learn almost nothing.…This new type of clinical trial is likely to be a necessary step on the road to deconstructing cancer.”19
The powerful analytical tools and protocols now available, or under development, can use data networks to recommend treatments—both on-label and off—that would “avoid unnecessary replication of either positive or negative experiments and maximize the amount of information obtained from every encounter.”20 Thus, they would allow every treatment to become “a probe that simultaneously treats the patient and provides an opportunity to validate and refine the models on which the treatment decisions are based.”21
Heterogeneity is a biological phenomenon seen in most, if not all, complex human diseases. Other medical disciplines are already following oncology’s lead. For example, the National Institute of Mental Health (NIMH), the world’s largest funder of mental health research, announced in 2013 that it was “re-orienting its research” away from the disease categories defined by psychiatrists in their Diagnostic and Statistical Manual of Mental Disorders. Henceforth, NIMH-funded researchers have been encouraged to search for molecular pathways that transcend the symptom-based categories.22 In the words of the NIMH’s director at the time, “patients and families should welcome this change as a first step towards ‘precision medicine,’ the movement that has transformed cancer diagnosis and treatment.”
Andy Grove, the pioneering founder and for many years CEO of Intel, took on the FDA in an op-ed in the prestigious journal Science in late 2011. The biotech industry, he noted, at that time spent more than $50 billion a year on research that yielded some twenty new drugs. Almost half of all clinical trials were delayed by difficulties in convening a group of patients that met Washington’s rigid requirements, and trial costs were spiraling upward as a result. The FDA’s trial protocols were conceived at a time when science lacked the computing power on which it can now draw. Already by 2011, that power made possible a process that, in its early phases, would enlist patients much more flexibly, to “provide insights into the factors that determine…how individuals or subgroups respond to the drug.…[F]acilitate such comparisons at incredible speeds…quickly highlight negative results.…[and] liberate drugs from the tyranny of the averages that characterize [FDA-scripted] trial information today….As the patient population in the database grows and time passes, analysis of the data would also provide the information needed to conduct post-marketing studies and comparative effectiveness research.”23
Doctors themselves have questioned whether the FDA’s trial protocols remain ethical. In a 2011 essay in the New England Journal of Medicine,24 Dr. Bruce Chabner of Massachusetts General Hospital Cancer Center in Boston noted that in the first phase of one trial, 81 percent of patients with certain mutations responded well to a new drug for metastatic melanoma. In the third phase, patients were nevertheless randomly assigned to receive either the new drug or an older one that had a 15 percent response rate. In Dr. Chabner’s words: “Since we can define subgroups of patients who have high rates of response and disease control in phase 1 trials of targeted cancer drugs, the traditional long pathway to FDA approval of these drugs is increasingly under fire for medical and ethical reasons.”25
The development of databases linking molecular factors to diseases has been underway for several years. By 2013, the director of the Genetic Variation Program in the National Institutes of Health’s National Human Genome Research Institute (NHGRI) estimated that there were “about 2,000 separate databases on specific genes and diseases” addressing genetic links to various diseases.26 The NIH itself has compiled a Cancer Genome Atlas.27 The NIH has been funding many other studies of genetic variations that affect health, among them a project that pools data supplied by a consortium of genetic researchers from around the world. It is also currently working directly with ten big drug companies and eight non-profit organizations that focus on specific diseases to unravel the molecular pathways that lead to Alzheimer’s, Type 2 diabetes, rheumatoid arthritis…and lupus—and to investigate new methods to track a disease’s progress that could provide early reads on how a drug is affecting it.28 The NIH announced in 2016 that it was awarding another $55 million for further research on the genetic links to diseases.29
The private sector is also deeply involved. Independent researchers and doctors have set up databases of their own in which they pool and analyze molecular and clinical data collected during the treatment of patients with approved drugs. Increasingly, these databases are being analyzed using software designed to recommend drug prescriptions—on label or off—that match the molecular pathway that is propelling the patient’s disorder with the pathway that a drug was designed to modulate. The managers of these systems and services often receive in return information on how things worked out, and the constant feedback steadily improves the quality of future treatment recommendations.
Google,30 and Illumina, the leading supplier of gene-sequencing machines, among others, recognized the converging, synergistic power of the biochemical and digital revolutions some time ago. And they already have broad access to customers and the tools to collect the data quickly and efficiently. Hence their rapidly rising interest in developing huge databases of molecular and clinical information and analytical engines that can unravel the complex causal chains and identify the signaling systems that propel cancers and other diseases.
New devices now make it quite easy to collect large amounts of genetic and other medically relevant data from many people. Amazon and Google are reportedly in a race to build the largest medically focused genomic databases. According to Google’s genomic director of engineering, Google aims to provide the best “analytic tools [that] can fish out genetic gold—a drug target, say, or a DNA variant that strongly predicts disease risk—from a sea of data.” Academic and pharmaceutical research projects are currently the company’s biggest customers, but Google expects them to be overtaken by clinical applications in the next decade, with doctors using the services regularly “to understand how a patient’s genetic profile affects his risk of various diseases or his likely response to medication.”31
Given enough data and computing power, modern statistical tools can map out complex causal networks and assess the importance of key nodes and links. In analyzing genomic databases, they have already demonstrated their ability to deal successfully with “hierarchical” pathways, identifying the relatively small number of genomic variations that play dominant roles—as hubs linked to other, less important, variations—and excluding the many variations that play no role at all. An analysis of this kind, for example, provided what has, until recently, been the standard categorization of breast cancers into four subtypes. A more recent analysis of more data revealed at least ten subtypes.32,33 This discovery in turn is providing the basis for a deeper understanding of the underlying physiological mechanisms by which breast cancers form and progress, and for research on new personalized therapeutic strategies.34,35
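As a toy illustration of the hub-finding idea, the sketch below ranks the nodes of a small, invented interaction network by degree centrality; in real analyses, the network itself must first be inferred statistically from thousands of molecular profiles.

```python
# A toy illustration of "hub" finding in a molecular-interaction network.
# The gene names and edges are hypothetical stand-ins; real analyses infer
# the graph statistically from large genomic datasets.
import networkx as nx

edges = [
    ("TP53", "MDM2"), ("TP53", "CDKN1A"), ("TP53", "BAX"), ("TP53", "ATM"),
    ("PIK3CA", "AKT1"), ("AKT1", "MTOR"), ("EGFR", "PIK3CA"), ("EGFR", "KRAS"),
    ("KRAS", "BRAF"), ("BRAF", "MAP2K1"),
]
g = nx.Graph(edges)

# Rank nodes by degree centrality; the top entries are candidate hubs.
hubs = sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1])
for gene, score in hubs[:3]:
    print(f"{gene}: {score:.2f}")  # TP53 dominates this invented network
```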
Medicine will also benefit from the fact that the statistical tools needed to unravel causal pathways from complex datasets are of great interest in other sectors of the economy. The “overarching goal” of the “Big Mechanism” program recently launched by the Defense Advanced Research Projects Agency (DARPA) is to develop methods to extract “causal models” from large, complex datasets and integrate new research findings “more or less immediately…into causal explanatory models of unprecedented completeness and consistency.” To test these new technologies DARPA has chosen to focus initially on “cancer biology with an emphasis on signaling pathways.”36 It’s a good call, and excellent news for oncology. Viewed from a data analytics perspective, the variability, complexity, and adaptability of cancer cells and terrorists have much in common.
Aligning prescriptions with the molecular etiology of diseases also makes possible the broader use of existing drugs at no additional cost to drug companies or patients. In 1998, doctors struggling to save premature babies, who often have trouble breathing, remembered that a drug widely used by adult men had originally been developed to help blood flow to oxygen-starved heart muscles. The drug in question—Viagra—hadn’t been licensed for that purpose, but with the lives of their tiny patients at stake, doctors tried it anyway. It seemed to work quite well and has since been licensed for that use in adults.37
Thalidomide was initially licensed by the FDA to treat leprosy. However, it turned out that the drug didn’t attack the leprosy bacterium; it alleviated symptoms that develop when the infection sends the patient’s immune system into overdrive. Dr. Gilla Kaplan, an immunologist at the Rockefeller University in New York, tracked the drug’s mechanism of action to tumor necrosis factor (TNF), one of three intercellular signaling proteins that the drug suppresses. TNF plays important roles in the communication system that our body uses to fight both germs and cancerous human cells. But when engaged in a losing battle, the body sometimes produces too much TNF, which can then cause painful lumps and lesions on the skin. TNF overloads can also cause wasting syndrome, a common condition in the late stages of AIDS. This finding has led to the repurposing of thalidomide to benefit some patients with HIV infection.38
Over the longer term, in sum, the FDA’s drug-approval system should shift from clinical-symptom-based approvals and labeling to molecular-indication-based labeling, grounded in information collected in adaptive clinical trials and supplemented by post-market data that will progressively improve clinicians’ ability to prescribe drugs with high precision, in the manner safest and most effective for each patient’s molecular profile. The doctors and medical centers that have already developed and begun to use rapid-learning databases and analytical systems should review their protocols and analytical tools with the FDA and cooperate in the development of uniform standards that would allow drug companies and the FDA to rely on their work to approve off-label uses and amend labels accordingly.
By developing the molecular science of precision medicine on the fly, adaptive trials also directly benefit the patients who participate in them. In contrast to conventional trials, which “expose large numbers of patients to drugs from which they may not benefit,” adaptive trials progressively converge on the cohort of patients who are benefiting or are likely to benefit from treatment with the drug. By doing so, adaptive trials can reach statistically robust conclusions faster, in trials that involve fewer patients. Such trials are more likely to culminate in the approval of a drug for a specific, well-defined cohort of patients, whereas the same drug would have been rejected after conventional trials in which too many of the wrong patients were treated to the bitter end.
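A back-of-envelope calculation, using the standard two-proportion sample-size formula and invented response rates, shows why converging on the responsive cohort can shrink a trial so dramatically.

```python
# A back-of-envelope comparison, with invented numbers, of trial sizes when a
# drug helps only a biomarker-defined 10% subgroup. Standard two-proportion
# sample-size formula, 5% two-sided significance, 80% power.
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Patients needed per arm to detect a response-rate difference p1 vs p2."""
    return ceil((z_alpha + z_beta) ** 2 *
                (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

p_control = 0.15    # hypothetical response rate on standard treatment
p_marker = 0.60     # hypothetical response rate in marker-positive patients
prevalence = 0.10   # marker-positive share of the symptom-defined population

# All-comers trial: the subgroup's benefit is diluted across everyone enrolled.
p_diluted = (1 - prevalence) * p_control + prevalence * p_marker
print(n_per_arm(p_control, p_diluted))  # -> 1102 patients per arm
print(n_per_arm(p_control, p_marker))   # -> 15 per arm once enrollment is enriched
```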
In the 2012 report Propelling Innovation in Drug Discovery, Development, and Evaluation, the President’s Council of Advisors on Science and Technology (PCAST) likewise noted that adaptive trial protocols, which it endorses, “potentially allow for smaller trials with patients receiving, on average, better treatments,” and that “integrating clinical trial research into clinical care through innovative trial designs may provide important information about how specific drugs work in specific patients.”39 A further benefit for patients would be addressing the growing “right to try” movement, which was endorsed by many states before becoming federal law this year.40 Adaptive trials may also sharply lower the cost of completing traditional clinical trials large enough to demonstrate statistically significant effects in patients with extremely rare diseases.
Much of the concern about high drug prices centers on the prices initially quoted for new drugs. But the lengthy trials required by the FDA shorten a drug’s effective patent life and thus the period during which the drug company can recover the costs incurred in developing the drug. Those costs therefore get loaded onto the early adopters, the relatively small number of patients who use the drug first. Drug prices would fall substantially if development costs were spread over larger cohorts of patients. They can be, and they should be.
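Simple arithmetic, with invented figures, illustrates the point.

```python
# Invented figures illustrating how cohort size drives per-patient cost
# recovery when a fixed development cost must be recouped during patent life.
development_cost = 2_000_000_000  # hypothetical $2 billion development cost
for cohort in (10_000, 100_000, 1_000_000):
    print(f"{cohort:>9,} patients -> ${development_cost / cohort:>9,.0f} per patient")
```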
With the advent of precision medicine technologies, the economic value of a new drug now has two components, the value of the drug’s design on the one hand, and the value of the databases and prescription protocols developed to inform prescribing doctors of which patients will respond well to treatment with the drug on the other. The companies that rush in to sell cheap generic alternatives to a new drug when its patent expires play little if any role in developing the prescription-related information, but their substitute drug will be of little value without it. Requiring them to contribute retroactively to the costs of its development could allow the pioneers who developed the drug and the precision medicine prescription protocols to recover some of their costs from the follow-up manufacturers who otherwise get a free ride on their coattails.
In sum, by gathering, tracking, and analyzing patient responses as a trial proceeds, adaptive trials can identify the efficacy and safety biomarkers a drug’s label needs, rescue drugs that conventional protocols would condemn, and bring the precision of modern molecular medicine into the drug-approval process itself.
Footnotes
1 A Framework for Biomarker and Surrogate Endpoint Use in Drug Development, PowerPoint presentation, November 4, 2004, slides 8, 36, https://slideplayer.com/slide/5869589/ (last visited August 30, 2018).
2 Gopa Iyer et al., Genome sequencing identifies a basis for everolimus sensitivity, 338 Science 221 (2012).
3 National Cancer Institute, The Exceptional Responders Study (last updated July 30, 2018), https://dctd.cancer.gov/majorinitiatives/NCI-sponsored_trials_in_precision_medicine.htm#h06 (last visited August 30, 2018).
4 Id.; https://www.cancer.gov/news-events/cancer-currents-blog/2017/exceptional-responders-progress (last visited August 30, 2018); Personal communication with Barbara A. Conley, MD, Associate Director of NCI Cancer Diagnosis Program (August 27, 2018).
5 Lynn M. Etheredge, A Rapid-Learning Health System, 26 Health Affairs w107 (2007).
6 Amy P. Abernethy et al., Rapid-Learning System for Cancer Care, 28 J. Clin Oncol 4268 (2010).
7 Jeff Shrager and Jay M. Tenenbaum, Rapid Learning for Precision Oncology, 11 Nat. Rev. Clin. Oncol. 109-18 (2014).
8 Peter W. Huber, The Cure in the Code 106 (2013).
9 Vincent T. DeVita, Jr., The Death of Cancer 254-255 (2015).
10 David A. Relman, Microbial Genomics and Infectious Diseases, 365 New Eng. J. Med. 347-357 (2011).
11 https://bacteria.ensembl.org/species.html (last visited August 28, 2018).
12 https://www.ncbi.nlm.nih.gov/genomes/GenomesGroup.cgi?taxid=10239 (last visited August 28, 2018).
13 Relman, supra note 10.
14 National Academy of Sciences, Beyond Discovery (2000), http://www.nasonline.org/publications/beyond-discovery/protease-inhibitors.pdf (last visited August 30, 2018).
15 Deirdre P Cronin-Fenton et al., Metabolism and transport of tamoxifen in relation to its effectiveness: new perspectives on an ongoing controversy, 10 Future Oncol 107-122 (2014).
16 National Academy of Sciences, National Research Council, Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease (2011).
17 President’s Council of Advisors on Science and Technology, Propelling Innovation In Drug Discovery, Development, and Evaluation (2012), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/pcast-fda-final.pdf (last visited August 30, 2018).
18 Samir N. Khleif et al., AACR-FDA-NCI Cancer Biomarkers Collaborative Consensus Report: Advancing the Use of Biomarkers in Cancer Drug Development, 16 Clin Cancer Res 3299-3318 (2010).
19 C. Anthony Blau and Effie Liakopoulou, Can we deconstruct cancer, one patient at a time? 29 Trends Genet 6-10 (2013).
20 Shrager and Tenenbaum, supra note 7.
21 Id.
22 Post by Former NIMH Director Thomas Insel: Transforming Diagnosis (April 29, 2013), http://www.nimh.nih.gov/about/director/2013/transforming-diagnosis.shtml, (last visited August 30, 2018).
23 Andy Grove, Rethinking Clinical Trials, 333 Science 1679 (2011).
24 Bruce A. Chabner, Early Accelerated Approval for Highly Targeted Cancer Drugs, 364 New England Journal of Medicine 1087–1089 (2011).
25 Id.
26 National Human Genome Research Institute, New NIH-funded resource focuses on use of genomic variants in medical care (September 25, 2013), https://www.genome.gov/27555151/2013-release-new-nihfunded-resource-focuses-on-use-of-genomic-variants-in-medical-care/ (last visited August 30, 2018).
27 National Cancer Institute and the National Human Genome Research Institute, The Cancer Genome Atlas, http://cancergenome.nih.gov (last visited August 30, 2018).
28 National Institutes of Health, NIH, industry and non-profits join forces to speed validation of disease targets (February 4, 2014), https://www.nih.gov/news-events/news-releases/nih-industry-non-profits-join-forces-speed-validation-disease-targets (last visited August 30, 2018).
29 Thomas M. Burton, Obama Administration Awards $55 Million for Research on Genetic Links to Disease, Wall Street Journal (July 6, 2016), http://www.wsj.com/articles/obama-administration-awards-55-million-for-research-on-genetic-links-to-disease-1467849601 (last visited August 30, 2018).
30 Ron Amadeo, Google X’s “Baseline Study” Applies Big Data Techniques to Healthcare, Ars Technica (July 25, 2014), http://arstechnica.com/business/2014/07/google-xs-baseline-study-applies-big-data-techniques-to-healthcare/ (last visited August 30, 2018).
31 Sharon Begley, Amazon, Google race to get your DNA into the cloud, Reuters Business News (June 5, 2015), https://www.reuters.com/article/us-health-genomics-cloud-insight/amazon-google-race-to-get-your-dna-into-the-cloud-idUSKBN0OL0BG20150605 (last visited August 30, 2018).
32 Samuel Aparicio, Molecular Subtypes of Primary Breast Cancer, Department of Molecular Oncology, British Columbia Cancer Agency Vancouver (2013), http://e-syllabus.gotoper.com/_media/_pdf/MBC13_02_0845_Aparicio_Final.pdf (last visited August 30, 2018).
33 Christina Curtis et al., The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups, 486 Nature 346-352 (2012).
34 Vera Cappelletti et al., Metabolic Footprints and Molecular Subtypes in Breast Cancer, Disease Markers (Epub Dec 24, 2017), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5757146/pdf/DM2017-7687851.pdf (last visited August 30, 2018).
35 Ji Hyun Park et al., How shall we treat early triple-negative breast cancer (TNBC): from the current standard to upcoming immuno-molecular strategies, 3 ESMO e000357 (Epub May 3, 2018), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5950702/pdf/esmoopen-2018-000357.pdf (last visited August 30, 2018).
36 Steve Jameson, Defense Advanced Research Projects Agency, https://www.darpa.mil/program/big-mechanism (last visited August 30, 2018).
37 Ben Whitford, Babies Who Take Viagra, Newsweek (July 24, 2005), https://www.newsweek.com/babies-who-take-viagra-121609 (last visited August 30, 2018).
38 Gilla Kaplan et al., Thalidomide for the treatment of AIDS-associated wasting, 16 AIDS Res Hum Retroviruses 1345-55 (2000).
39 President’s Council of Advisors on Science and Technology, supra note 17.
40 Trickett Wendler, Frank Mongiello, Jordan McLinn, and Matthew Bellina Right to Try Act of 2017, Pub. L. No. 115-176 (2018).