Abstract
Understanding the role of nutrition in the prevention of long-latency chronic disease is one of the greatest challenges facing the health sciences field today. The scientific community lacks consensus around how to appropriately generate and/or evaluate the available nutrition data to inform treatment recommendations and public policy decisions. Evidence-based medicine (EBM) is a well-established research paradigm for the evaluation of drug effects. Currently, EBM is arguably being misapplied in order to establish the relationship between nutrients and human health. Nutrients and other bioactive food components are not drugs, and several distinguishing characteristics are overlooked in the design and/or interpretation of nutrition research. Unlike drugs, nutrients work in complex networks, are homeostatically controlled, and cannot be contrasted to a true placebo group. The beneficial effects of nutrients are small and can take decades to manifest. A new paradigm of evidence-based nutrition (EBN) needs to be established that sets criteria and guidelines for how to best study the effects of nutrients in humans. EBN must consider the complex nuances of nutrients and bioactive food components to better inform the design and interpretation of nutrition research. Practitioners, researchers, and policy makers will be better served by a nutrition-centered framework suited to assess the totality of the available evidence and inform treatment and policy decisions. Several recommendations for guidelines and criteria that could help define the EBN research paradigm are discussed.
Introduction
There is general agreement within the nutrition science and practitioner communities that one’s diet, nutritional status, and lifestyle can substantially predispose one to (or protect against) many chronic diseases and other conditions, including heart disease and diabetes. For decades, the US government has invested, and continues to invest, enormous resources to support programs such as the Dietary Guidelines for Americans1 and the Institute of Medicine’s (IOM) Dietary Reference Intakes2 to develop recommendations for diet and nutrient intake levels that will, among other things, reduce chronic disease risk within the population. The nutrient-chronic disease relationship is also addressed by the Food and Drug Administration (FDA) when it reviews Health Claim and Qualified Health Claim petitions,3 both of which are viewed as broad public health statements. But many questions unique to nutrition remain when it comes to evaluating the evidence on which these and other recommendations are based. Although a research paradigm for the evaluation of drug effects—evidence-based medicine (EBM)—has been established for years,* the amount, level, and scope of scientific evidence needed to support nutrition recommendations, and how that evidence should be interpreted, continue to be the subject of intense debate.4–6 Obtaining this evidence has proved challenging due to resource and feasibility limitations. Consensus does not yet exist on how to appropriately generate and/or evaluate the available data to inform clinical and/or public policy decision making. These and other important issues are currently being debated by scientists from government (FDA, NIH, USDA), academia, and industry, as well as among practitioners.
Evidence-Based Medicine Vs. Evidence-Based Nutrition
Unlike pharmaceuticals, which have long been studied under the principles of EBM, nutrition and chronic disease research is in a relative state of infancy. Nutrition researchers have yet to establish clear criteria and guidelines for how best to study the effects of nutrients in humans, and subsequently how to evaluate those findings—in other words, what constitutes evidence-based nutrition (EBN). In the absence of such guidelines, the long-established principles of EBM, with their strong reliance on randomized, controlled trials (RCTs), have been applied to fill this void. Within this paradigm, expert opinion is given the least weight, while practitioners’ clinical experiences are not even considered part of the evidence base.
The traditional RCT is viewed in the EBM hierarchy as the gold standard for research on cause-and-effect relationships, and its design is better suited to assessing the efficacy and safety of drugs, not nutrients. When designed, executed, and analyzed properly, the results of RCTs can be persuasive and provide a high level of certainty. Such certainty, one could argue, is necessary when assessing the effects of expensive, potent, and potentially dangerous drug therapies. This cost-benefit-risk equation, while appropriate for drugs, is substantially different for nutrients. Several nutrition researchers have, in recent years, raised concerns over what is perceived to be the misapplication of drug-based trials to assess nutrition questions, without taking into account the totality of the evidence or the complexities and nuances of nutrition.5–8 Drugs generally tend to have single, targeted effects; drugs are not homeostatically controlled by the body and can easily be contrasted with a true “placebo” group; drugs can act within a relatively short therapeutic window of time, often with large effect sizes. In contrast, nutrients tend to work in complex systems in concert with other nutrients and affect multiple cells and organs; nutrients are homeostatically controlled, and thus the body’s baseline nutrient “status” affects the response to a nutrient intervention; a nutrient intervention group cannot be contrasted with a true placebo group (ie, a “zero” exposure group); and with respect to chronic disease prevention, nutrient effect sizes tend to be small and may take decades to manifest. Finally, the very absence (or inadequacy) of a given nutrient produces disease, which is a fundamental difference compared to drugs (summarized in Table 1).
* A PubMed search for “evidence-based medicine” resulted in 41,096 publications; the same search for “evidence-based nutrition” resulted in 37 publications. http://www.ncbi.nlm.nih.gov/sites/entrez. Accessed August 10, 2010.
These nuances, while seemingly apparent, have been largely overlooked in the design and/or interpretation of some of the most resource-intensive, high-profile RCTs conducted in recent years. The results of these recently published trials,9–13 when judged by EBM criteria, have led to conclusions that there is no evidence to support the supplemental nutrient-chronic disease relationship. But given the clear, yet under-appreciated, differences between drugs and nutrients, one must ask a series of important questions regarding study design, the questions intended to be addressed, the questions that were actually addressed, and whether broad conclusions can be drawn from these studies to serve as the basis for recommendations (or lack thereof). If blind application of EBM to nutrition questions is inappropriate, the scientific paradigm within which nutrients should be evaluated needs to be defined.
Table 1. Contrast Between Drugs and Nutrients
Parameter | Drugs | Nutrients
Essentiality | None | Essential
Inadequacy results in disease | No | Yes
Homeostatically controlled by the body | No | Yes
True placebo group | Yes | No
Targets | Single organ/tissue | All cells/tissues
Systemic function | Isolated | Complex networks
Baseline “status” affects response to intervention | No | Yes
Effect size | Large | Small
Side effects | Large | Small
Nature of effect | Therapeutic | Preventive
The Women’s Health Initiative (WHI) trial13 is a glaring example of the difficulties researchers face when conducting large-scale, long-term RCTs examining the effect of supplemental nutrients on chronic disease risk, even when adequate resources are readily available. While well intentioned, the trial (which included multiple arms: calcium and vitamin D supplementation; low-fat diet; hormone replacement therapy) suffered from a host of logistical limitations, including poor compliance, extensive use of supplemental nutrients in the placebo arm (due to ethical constraints), and other administrative difficulties associated with multicenter trials. Because the investigators found themselves caught in an ethical dilemma (WHI was initiated when awareness of the bone-protecting benefits of calcium was just becoming widespread), they could not prevent the use of calcium supplements by the placebo group. The result was a median calcium intake in the placebo group of nearly 1,100 mg/day. Thus, the hypothesis ostensibly tested in the WHI trial was not “low vs. high calcium intake” but “high vs. higher calcium intake.” The erroneous message sent from this multimillion-dollar ($625 million), NIH-sponsored trial was that calcium and vitamin D supplementation is not useful for maintaining bone health in post-menopausal women, which is counter to the overwhelming majority of evidence. This has prompted some to question the value of large and expensive RCTs: “The results of the WHI add further evidence that clear answers to questions about the long-term effects of diet on risks of cancers and other major diseases may not be obtainable by large randomized intervention trials, no matter how much money is spent conducting them.”14
Despite this assertion, regard for the principles of EBM and the RCT as the unquestioned gold standard has resulted in the misuse of the WHI trial as part of the evidence base supporting calcium and vitamin D’s effect on fracture risk. In a recent meta-analysis,16 the WHI study, as a large RCT, was automatically assigned the most weight by far among the 17 studies included in the analysis. This skewed the combined effect on fracture risk toward the null (although the combined effects of the other, smaller trials included in the analysis still resulted in a statistically significant combined 12% reduction in fracture risk). Systematic reviews and meta-analyses should be interpreted judiciously and should not be considered on their own as high-level evidence, because they are statistically assisted interpretations of primary evidence that carry their own set of limitations and biases.
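To make the weighting mechanics concrete, the short sketch below applies standard inverse-variance (fixed-effect) pooling to a set of invented relative risks. The trial values are hypothetical illustrations, not the data from the cited meta-analysis; the point is only to show how a single very large near-null trial can dominate the weights and pull the pooled estimate toward the null even when the smaller trials consistently show benefit.

```python
import numpy as np

# Hypothetical trial results: relative risk (RR) of fracture and the
# standard error of log(RR). The last entry mimics one very large trial
# with a near-null result; the others mimic smaller trials showing
# benefit. These numbers are invented for illustration only.
rr = np.array([0.80, 0.85, 0.75, 0.88, 0.99])
se = np.array([0.15, 0.12, 0.18, 0.14, 0.03])

log_rr = np.log(rr)
weights = 1.0 / se**2  # inverse-variance (fixed-effect) weights

pooled_rr = np.exp(np.sum(weights * log_rr) / np.sum(weights))
print("weight share (%):", np.round(100 * weights / weights.sum(), 1))
print("pooled RR:", round(pooled_rr, 3))
# The large trial carries roughly 85% of the total weight, so the pooled
# RR lands near 0.96 despite 12-25% risk reductions in the smaller trials.
```

Because weights scale with the inverse of the squared standard error, a trial with one-fifth the standard error of its neighbors receives roughly 25 times their weight, which is why a single mega-trial can swamp an otherwise consistent body of smaller studies.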
The Selenium and Vitamin E Cancer Prevention Trial (SELECT)11 is an example of a high-profile RCT whose results have been largely misinterpreted and miscommunicated. The investigators terminated the study early, concluding there was no beneficial effect of selenium and vitamin E supplementation on prostate cancer risk. The form of selenium used in SELECT (selenomethionine) differs from the yeast-based selenium product used in a previous trial, which had suggested, through a secondary analysis, that supplemental selenium could lower the risk of prostate cancer.16 The decision to use an alternate form of selenium was apparently driven by the need for standardized and highly stable material that would maintain consistency throughout the length of a multiyear trial. This could not be achieved with yeast-based selenium, hence the decision to use selenomethionine. This is a common dilemma encountered by nutrition researchers investigating bioactive compounds derived from natural sources, such as fish oil, bovine colostrum, and probiotics, and it illustrates yet another way the traditional RCT model does not account for the subtle nuances of nutrition interventions. Furthermore, the subjects enrolled in the study by Clark et al had relatively low baseline serum selenium levels (suggesting they were inadequate or insufficient), whereas the majority of men enrolled in SELECT were relatively replete in selenium. Finally, the men enrolled in SELECT had extremely low risk for prostate cancer (only 1 death due to prostate cancer occurred in the entire cohort), making it more difficult to detect an effect of the intervention. These seemingly minor limitations may have had a major impact on the outcome, an issue that has been inadequately communicated to practitioners and the public.
A more recent example of inappropriate application of EBM to nutrition research comes from a study of the effect of antioxidant supplementation on preeclampsia.17 Investigators randomized more than 10,000 women to receive 1,000 mg vitamin C and 400 IU vitamin E daily, or placebo, between the 9th and 16th weeks of pregnancy and concluded there was no effect of antioxidants on preeclampsia. Analysis of the findings reveals that the majority of the women enrolled in the study (80%) were using multivitamins, which could have affected their baseline nutritional status and, therefore, their response to the supplemental vitamin C and E. Furthermore, vitamin C and E status was not assessed at baseline or during the study, so one cannot know whether these women were truly in need of supplementation. Finally, the premise of the study is that oxidative stress may induce preeclampsia. However, oxidative stress was not measured at baseline or during the study, so the “oxidative stress status” of these women was not known; if they were not oxidatively stressed in the first place, it follows that the antioxidant supplements would fail to have an effect. These critical nutritional nuances were overlooked by the investigators and the publishing journal as well. Clinicians should not take the results from this RCT at face value and abandon antioxidant supplementation among this target population, but should instead determine what level of confidence they have that the data from this trial are transferable to the individual patients sitting in their offices.
This “blind faith” in RCTs, without consideration of study limitations and quality, should be of greater concern than it currently is. A well-designed RCT eliminates variables such as comorbid conditions and concomitant interventions, and assumes that individual variability in treatment response will be randomly allocated if the trial is large enough. Conversely, a clinician must carefully consider these same variables when deciding if a particular treatment is suited for an individual patient. From the clinician’s perspective, an RCT may be the best way to determine if a treatment works; however, it reveals little about which individuals will benefit. EBM applies a hierarchy of evidence (with the RCT as the “gold” standard) to guide clinical judgment rather than using clinical judgment as a guide to evidence that is relevant to an individual patient.18 Recommendations, whether public health-based or practitioner-patient-based, should be developed from the totality of the available evidence, not from a single study or study design.
Prevention Vs. Treatment
Perhaps one of the most important, but often ignored, differences between the research paradigms for drugs and nutrients lies in the cost and logistical complexity associated with conducting RCTs. Setting aside the preclinical research needed for drug development (which is substantially resource intensive, due in part to the number of candidate drugs that never make it to market), human trials involving nutrients are far more costly than those for drugs.
Drugs are most often studied in a therapeutic context (ie, to treat, cure, or mitigate a disease or condition), while nutrients are studied with a focus on health promotion or disease risk reduction. These are fundamentally different approaches, with tremendous implications for cost and feasibility. In the context of an RCT, studying treatment of a disease or condition (when all subjects have the disease at baseline) is far less costly than studying prevention or risk reduction of the disease (when no subjects have the disease or condition at baseline). The subtle effects and small effect sizes of nutrients mean far more subjects are needed to demonstrate statistical significance. It is estimated that the net cost in terms of subjects, duration, and total dollars for chronic disease risk reduction trials exceeds that for therapeutic trials by more than 10-fold (Table 2; an illustrative sample-size calculation follows the table).* Furthermore, chronic diseases can take decades to develop, so demonstrating a statistically significant and clinically relevant reduction in risk with any intervention requires very long-term trials. It is also important to note that, unlike the pharmaceutical industry, which funds, designs, and controls its own research, the food and dietary supplement industries must rely almost exclusively on government- and/or academically funded studies. This is due largely to the inability or lack of means (legal or financial) of food and dietary supplement firms to develop, maintain, and defend intellectual property. As a result, these firms have little or no exclusivity on the use of research to support marketing efforts. Thus, the profit margins and, ultimately, the research and development budgets of food and dietary supplement firms tend to be much smaller than those of their pharmaceutical counterparts.
The case of beta-carotene is an excellent example of inappropriate application of a therapeutic study design to address a prevention question. Decades ago, observational studies suggested that diets and/or serum high in beta-carotene were associated with a lower risk of certain cancers, including lung cancer. This led to RCTs published in the mid-1990s (the famous “Finnish trials”19,20) in which lifelong smokers or asbestos workers were supplemented with high doses of antioxidants, such as beta-carotene. The results at the time were shocking: compared to placebo, supplementation with beta-carotene significantly increased the risk of lung cancer in these smokers and asbestos workers. To this day, this example is misused to demonstrate that the results of an RCT invalidated earlier epidemiological data. Some clinicians guided by EBM concluded that beta-carotene presents a similar risk of increased lung cancer to all patients, including those who do not smoke or have asbestos exposure, and discontinued its use altogether. Indeed, in its evidence-based review system guidance document, FDA touts this example as one that justifies the EBM approach to data evaluation, stating that the results of RCTs “trump” those of observational studies.3 Ignored is the fact that the RCTs in smokers and asbestos workers asked and answered questions different from those of the earlier epidemiological studies. Assessing the effect of lifelong exposure to a modest amount of a nutrient, in the context of the whole diet, in a general population that is healthy at baseline is completely different from administering a high dose of a single, purified, and isolated nutrient to a very specific population (eg, lifelong smokers) that is not healthy at baseline (because lung cancer was likely already well on its way). In the latter case, beta-carotene was studied as a therapeutic drug, not a nutrient. Asking whether beta-carotene can behave like a drug is certainly worthwhile, and sometimes necessary. But the design and interpretation of such a study should be vastly different from one that studies a nutritive effect. A quote from a recent editorial on nutrition and cancer summarizes the well-intended but misguided beta-carotene trials: “By analogy, when keys are missing, it is common to look for them under the lamppost where there is light rather than in the murky location where the keys were more likely dropped.”21
Table 2. Cost Comparison Between Therapeutic and Risk Reduction RCTs*
Parameter | Therapeutic (drug) trial | Risk reduction (nutrient) trial
Those with disease at baseline | 100% | 0%
Placebo administration | 20% cured (80% still have disease) | 20% acquire disease (80% do not acquire disease)
Intervention administration (if 25% effective) | ¼ of 80% (20%) additionally cured; 60% still have disease | ¼ of 20% (5%) do not acquire disease; 15% acquire disease
Desired statistical power | α = 0.05, power = 0.8 | α = 0.05, power = 0.8
Subjects required per group | 64 | 714
Cost | $1.3 million | >$15 million
* Based on presentation at CRN’s Day of Science, May 8, 2008. “NCCAM research initiatives focused on prevention” by Josh Berman, MD, PhD, National Center for Complementary and Alternative Medicine, NIH.
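The roughly 10-fold gap in Table 2 falls out of the standard sample-size arithmetic for comparing two proportions. The sketch below uses the pooled-variance normal approximation with a one-sided α of 0.05, which is an assumption on our part (the cited presentation did not publish its formula), but it reproduces the table’s per-group subject counts to within rounding.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects per arm to detect a difference between two
    proportions, using the pooled-variance normal approximation with a
    one-sided alpha (an assumed choice; other standard formulas give
    similar, slightly different numbers)."""
    z_alpha = norm.ppf(1 - alpha)   # one-sided critical value, ~1.645
    z_beta = norm.ppf(power)        # ~0.842 for 80% power
    p_bar = (p1 + p2) / 2           # pooled proportion
    return (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2

# Therapeutic trial: 20% cured on placebo vs 40% cured on intervention
print(round(n_per_group(0.20, 0.40)))   # ~65 subjects per group

# Risk reduction trial: 20% acquire disease on placebo vs 15% on intervention
print(round(n_per_group(0.20, 0.15)))   # ~714 subjects per group
```

The driver is the denominator: shrinking the detectable difference from 20 percentage points to 5 inflates the required sample roughly 16-fold, only partly offset by the smaller pooled variance, hence the order-of-magnitude difference in subjects and cost shown in the table.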
The Double Standard
A number of public health recommendations urge Americans to increase the consumption of fruits and vegetables in the diet, including the Dietary Guidelines for Americans1 and several FDA-approved health claims.22–24 But the evidence on which these recommendations are based consists almost entirely of observational studies in various forms, not the “gold standard” RCT. With a few exceptions, such as the DASH trial,25 there are almost no RCTs that demonstrate chronic disease risk reduction from fruit and vegetable intake, and researchers still cannot definitively conclude whether it is the presence of fruits and vegetables in the diet or the displacement of other foods that is responsible for the observed effects. Yet few would debate that fruit and vegetable consumption is important for health and can lower one’s risk of chronic disease. The apparent double standard, in which a strong recommendation arises from what is perceived as “poor quality” data, is more likely due to some of the practical constraints already mentioned than to a lowering of scientific standards. RCTs involving whole foods or diets are extremely difficult to conduct, perhaps even more so than nutrient-based trials, and for some of the same reasons (eg, ethical issues, no true placebo group, compliance). The key for policy and regulatory scientists has been the consistency of the relationships demonstrated in food-based epidemiological studies. Despite the apparently incongruent findings of 2 recent, large prospective studies showing no relation between fruit and vegetable intake and cancer outcomes,26,27 the totality of the evidence, when assessed as a whole, continues to support a beneficial effect with respect to chronic disease. As with the case of smoking (there are no RCTs that show smoking causes lung cancer, but the cause-and-effect relationship is well accepted due to the consistency of observational data), the association of fruit and vegetable consumption with positive health outcomes has been very consistent.
In 2006, NIH held a state-of-the-science conference on multivitamins and chronic disease prevention.28 Despite a lengthy list of observational studies suggesting that the use of multivitamins is associated with a variety of health benefits, including lower chronic disease risk, the expert panel concluded that it could not recommend for or against the use of multivitamins for reduction of chronic disease risk. This conclusion was inevitable in light of the fact that the panel used a strictly EBM approach, excluding all observational data and relying solely on RCTs (achieved after excluding all but 63 of the more than 11,200 possible reports in the literature). As scientists, we can only wonder what the conclusions would have been had the panel been tasked with addressing fruits and vegetables. And if these same panelists were your physicians, they might not advise you to stop smoking, given the lack of RCT data demonstrating that smoking causes lung cancer.
Related to the feasibility and ethical constraints of conducting RCTs, consider the following scenario: consumption of a nutrient or bioactive, or a group of these, during pregnancy (ie, exposure in utero) is linked to reduction of adult chronic disease risk in the offspring. Such a nutrient-disease relationship could never be “validated” in an RCT because of ethical, resource, and other logistical constraints. This presents a challenge when attempting to base public health or patient recommendations on a sound evidence base. However, the absence of this kind of experimental data should not be an excuse for indecision or inaction. Despite its many limitations, EBM has become the de facto standard for developing guidelines and criteria for medical training, clinical practice, reimbursement decisions, and public policy. EBM’s emphasis on reductionist science, research methodology, and statistical power, and its concurrent de-emphasis of epidemiological evidence, expert opinion, and clinical experience, have left many clinicians wondering: are we letting the tail wag the dog?18
Testing Single, Isolated Nutrients
In January 2009, FDA released a final guidance explaining the agency’s evidence-based review system for the evaluation of health claims, in which it states clearly that RCTs “trump” observational studies, demonstrating its adherence to EBM principles.3 Given the difficulties associated with conducting these studies on single, isolated nutrients, industry may need to reconsider single-nutrient health claims altogether. In hindsight, it seems farfetched to have hypothesized that supplementation with a single nutrient can reduce the risk of chronic diseases like cardiovascular disease and cancer. Certainly, this is not an approach taken by integrative medical practitioners. And while the question remains to be answered—whether certain single nutrients, when provided in supplemental quantities, can on their own reduce chronic disease risk—the research to date suggests this to be a tall order. One obvious reason is that nutrients do not function in isolation. Rather, they function in vast, complex networks (eg, the antioxidant network, the methylation pathway). In addition, today’s medical landscape is dominated by multi-organ, multifactorial, long-latency degenerative and chronic diseases that result, in part, from a complex interplay of genetics, diet, lifestyle, inactivity, stress, and environmental toxins. Studies involving supplementation with single nutrients do not take this complexity into account. There are a few exceptions, such as vitamins D and E and long-chain omega-3 fatty acids; supplementation with each of these alone has been shown to have beneficial effects on chronic disease risk, immune function, and inflammation. The body’s response to supplemental nutrients depends on its baseline status: the lower (or more inadequate) the status, the greater the response. Americans are known to have low status or inadequate intakes of all three of the aforementioned nutrients,29–36 which may explain why many supplementation studies have demonstrated positive effects. Interestingly, and unlike single-agent, single-target drug trials, in all of these examples the benefits appear to occur through multiple mechanisms, which is another difference between measuring drug vs. nutrient efficacy. Nevertheless, NIH funding of the large-scale, long-term RCTs that at present appear to be needed to inform nutrition policy decisions is likely to stall or even decline, due mainly to the null results of some recent high-profile trials. Those large-scale trials that are now being funded by NIH, such as the Age-Related Eye Disease Study-2 (AREDS 2)37,38 and the Vitamin D and Omega-3 Trial (VITAL),39 tend to involve multiple nutrients.
“Bioactives”
A challenge for the dietary supplement and functional food industries, against the backdrop of EBM as the currently accepted research paradigm, is resolving the quandary of how “bioactives” are to be studied. Also referred to as “nutraceuticals” or “functional ingredients,” these substances are neither drugs nor essential nutrients (although they may be considered “conditionally essential” for some patient populations). They are, however, prevalent in the food supply, in dietary supplements, and in functional foods, and they do have purported health benefits. An important question regarding assessment of their effects on health and chronic disease risk is whether well-known substances such as flavonols, carotenoids, isoflavones, anthocyanidins, and so on should be studied like drugs or like nutrients. The answer largely depends on how the body handles bioactive substances and how these substances behave in the body (ie, whether or not they are homeostatically controlled). Little is known about how the body metabolizes and regulates bioactives, but we do know that in many cases humans have been exposed to them through the diet for millennia and that we have evolved to depend physiologically on some dietary bioactive compounds to function in our environment. Examples include emerging evidence showing the long-chain polyunsaturated fatty acid DHA being utilized as a chemical messenger that signals resolution of inflammation40 and the carotenoids lutein and zeaxanthin from green leafy vegetables protecting the eyes from oxidative stress and the high-energy photons of blue light.41 This suggests that bioactives behave more like nutrients than drugs, and hence may require a different research paradigm to assess their impact on health.
Importance Of Biomarkers
The single greatest barrier to researching the role of nutrition in health promotion and chronic disease prevention is the paucity of biomarkers validated as surrogates for disease and wellness endpoints. A surrogate endpoint is a biomarker that, if modified, directly modifies the risk of the endpoint itself. The ability to rely on surrogate endpoints dramatically improves the feasibility of human trials, both in terms of duration and total cost. As far as health claims are concerned, FDA has denied several in part because the studies submitted in support of the petitions relied on non-validated biomarkers as surrogate endpoints for disease.42–44 In the absence of validated biomarkers as surrogates for disease, study outcomes must assess the disease endpoint(s) directly, rendering assessment of the effects of nutrients or food components on disease risk extremely lengthy and costly. To date, FDA has relied on advice from authoritative bodies, such as IOM or NIH, as to which biomarkers are validated surrogate endpoints. The current accepted list is disappointingly brief and has changed little in the past decade (Table 3). FDA recognizes this deficiency and, in 2009, funded an IOM expert committee to examine this issue and develop a scientific framework on which validation of biomarkers should be based. The committee report, released in May 2010,45 stresses 3 steps for biomarker evaluation: analytical validation, qualification, and utilization. The recommendations for biomarker validation make it abundantly clear that the process will be both time- and resource-intensive. It may not be sufficient for FDA to simply apply the framework to the scientific literature to determine which biomarker candidates can be validated as new surrogate endpoints for disease, since much more research is clearly needed to establish existing biomarkers as legitimate candidates. The IOM report is a step in the right direction, but it will be years before a significant number of new surrogate endpoints are added to FDA’s “recognized” list.
Table 3. Biomarkers That Are Recognized (and Not Recognized) by FDA as Surrogate Endpoints for Chronic Disease
Chronic disease | Surrogate endpoint—Recognized
Cardiovascular disease | LDL-C, blood pressure
Colon cancer | Polyps
Osteoporosis | Bone mineral density, fracture
Diabetes | Blood sugar/insulin resistance
Dementia/cognitive decline | (none)
Chronic disease | Surrogate endpoint—Not recognized
Cardiovascular disease | Serum homocysteine, triglycerides, HDL-C, inflammatory factors (CRP, etc.)
Osteoarthritis | Cartilage deterioration, joint-space narrowing
Macular degeneration | Macular pigment optical density
Prostate cancer | Prostate-specific antigen
Various chronic diseases | Single nucleotide polymorphisms (SNPs), other “omics”
Not addressed in the IOM report is the need for biomarkers of health or wellness. A primary goal of nutrition is health maintenance and promotion, yet no validated biomarkers of health exist. In the search for new biomarkers of health and wellness, investigators are turning to the classical principles of homeostasis, proposing that the term “health” be defined as the ability to adapt to internal and external stimuli or stresses.46 New models are being developed that take into account the complexity and balance of homeostatic mechanisms. These models are based on dynamic processes (eg, systemic inflammation) instead of single endpoints (eg, serum LDL-C). A broader and likely more predictive indication of health status may be obtained by measuring the ability of individuals to adapt to a stressor (ie, maintain homeostasis). Physiologic challenges such as the oral glucose and lipid tolerance tests, organ function tests, exercise stress tests, and psychological-stress tests could be incorporated more widely into nutrition research to better assess a given intervention’s effect on health; a simple example of such a challenge-based readout is sketched below.
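As a concrete illustration, the sketch below computes one possible challenge-test readout: the incremental area under the plasma glucose curve (iAUC) after an oral glucose load. The time points and glucose values are invented for illustration; the idea is that a smaller post-challenge excursion, ie, a faster return to baseline, would indicate better homeostatic adaptation to the stressor.

```python
import numpy as np

# Hypothetical oral glucose tolerance test: sampling times (minutes after
# the glucose load) and plasma glucose (mg/dL). Values are invented.
t = np.array([0, 30, 60, 90, 120])
glucose = np.array([90, 155, 140, 115, 95])

# Incremental rise above the fasting baseline, clipped at zero so that
# dips below baseline do not subtract from the excursion.
rise = np.clip(glucose - glucose[0], 0, None)

# Trapezoidal-rule area under the incremental curve (mg/dL * min).
iauc = np.sum((rise[1:] + rise[:-1]) / 2 * np.diff(t))
print(f"incremental AUC: {iauc:.0f} mg/dL*min")  # 4275 for these values
```

A nutrition intervention’s effect on “resilience” could then be expressed as a change in iAUC, or in time to return to baseline, rather than as a change in a single static biomarker.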
Recommendations
There is no question that the RCT is an important component of the evidence base, whether dealing with medicine or nutrition. No other approach can establish causality; in the case of nutrition, this means causality between supplemental nutrients or other food components and chronic disease risk. However, the RCT in its current form is ill-suited to assess the effects of nutrients on chronic disease risk and must be modified if it is to serve as an effective tool for EBN. We need not go as far as to recommend that large RCTs on nutrition and chronic disease be abandoned,47 but a paradigm shift is necessary.
Expectations among nutrition and policy scientists, industry, and practitioners must be redefined. The complex but important nuances of nutrition science need to be incorporated into the design and interpretation of the evidence base (ie, we must move from EBM to EBN).
- Applying the “reductionist” approach of targeting single, isolated nutrients is no longer appropriate. Nutrients (and perhaps bioactives) interact with each other in vast and complex networks (eg, optimizing calcium’s bone-protective effect also requires adequate or optimal vitamin D and protein, and perhaps vitamin K and magnesium as well; B vitamins function together in the one-carbon metabolism pathway; antioxidants are known to recycle each other in a network). Studying one isolated nutrient, without understanding the contextual biology of the nutrient, its interactions, and the underlying status of the patient or population, will surely be met with failure.
- A paradigm for assessing the effects of “bioactives” is needed. Whether these are studied as nutrients or drugs must be established to properly inform future regulatory and policy decisions.
- The limitations of the RCT, whether ethical, logistical, or cost-related, render this approach unfeasible, and at times uninformative, under certain circumstances. However, these limitations cannot be allowed to preclude decision making entirely. It is critical to assess the totality of the available evidence in order to make informed decisions for patients and public health, even in the face of suboptimal evidence.
- In most cases of the nutrient-health and disease relationship, the optimal evidence base is not achievable, due to the host of aforementioned limitations and other constraints. However, the absence of optimal evidence should not completely preclude decision making. The important cost-risk-benefit equation is vastly different for nutrients vs. drugs. The low cost, low risk, and modest benefit of nutrients suggest that decisions might still be made in the face of suboptimal evidence or lesser certainty. Indeed, nutrition science is an ever-evolving continuum (in both directions) that rarely, if ever, reaches 100% certainty, with most of the evidence falling somewhere between “uncertain” and “probable.”
- RCTs are still necessary to inform the evidence base, when and where possible. It is important to recognize their limitations and still be willing to take action when RCTs are not feasible, but that is not license to lower the standard of scientific rigor for nutrition science. In general, RCTs involving nutrients should incorporate greater utilization of biomarkers, including those of nutrient exposure/status, both at baseline and throughout the intervention, and, where applicable, those of surrogate disease and wellness endpoints. Although not discussed in this paper, incorporation of nutrigenomic, proteomic, and metabolomic analyses in the design of RCTs is critical. These may not only serve as surrogates for important phenotypic or clinical endpoints but also help define groups of responders and non-responders to a given intervention (both in terms of efficacy and harm). The multisystem character of nutrient effects calls for measurement of multiple outcomes in RCTs. For example, a nutrition intervention, even one involving a single nutrient, might lower blood pressure, affect visual function, decrease biomarkers of inflammation, and enhance insulin sensitivity, among other beneficial effects. Individually, these outcomes, due to inherent biological and individual variability and subtle effect sizes, might tend to be nonsignificant (both clinically and statistically). However, if assessed in the aggregate, they might well present an overall “global” benefit. Ideally, such analyses would be specified a priori, with the research approach designed to assess some composite or “global index” of all of the appropriate endpoints (ie, whether a given intake of a nutrient or nutrients provides a total-body health benefit); a sketch of one such composite follows this list.
- Clinicians should resist the current trend toward being reduced to technicians who deliver EBM-based algorithms and guidelines. Best practice should include a reliance on clinical experience, evaluation of the best available and most relevant evidence, and the therapeutic relationship between the doctor and patient.
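To illustrate the “global index” idea referenced above, the sketch below builds an O’Brien-type composite: each endpoint is standardized as a z-score against the placebo arm and the z-scores are averaged per subject, so that several individually noisy benefits can aggregate into one testable outcome. All data, endpoint names, and effect sizes here are invented for illustration; a real trial would prespecify the endpoints, their directions, and any weighting a priori.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 50  # subjects per arm (hypothetical)

# Per-subject changes on three endpoints, signs aligned so that a positive
# value always means benefit: blood pressure drop (mmHg), CRP drop (mg/L),
# insulin sensitivity gain (index units). The treatment arm receives small,
# subtle true effects; the placebo arm receives none.
endpoint_sd = {"bp_drop": 8.0, "crp_drop": 1.5, "isi_gain": 0.9}
true_effect = {"bp_drop": 2.0, "crp_drop": 0.3, "isi_gain": 0.15}

treat = {k: rng.normal(true_effect[k], s, n) for k, s in endpoint_sd.items()}
placebo = {k: rng.normal(0.0, s, n) for k, s in endpoint_sd.items()}

# O'Brien-type global index: standardize each endpoint against the placebo
# arm's mean and SD, then average the z-scores within each subject.
mu_ref = {k: placebo[k].mean() for k in endpoint_sd}
sd_ref = {k: placebo[k].std(ddof=1) for k in endpoint_sd}

def global_index(arm):
    return np.mean([(arm[k] - mu_ref[k]) / sd_ref[k] for k in endpoint_sd],
                   axis=0)

# Endpoint-by-endpoint tests are underpowered for such subtle effects;
# the composite pools the small signals into a single comparison.
for k in endpoint_sd:
    print(k, "p =", round(ttest_ind(treat[k], placebo[k]).pvalue, 3))
print("global index p =",
      round(ttest_ind(global_index(treat), global_index(placebo)).pvalue, 3))
```

An averaged z-score is only one established variant; rank-based or weighted composites exist as well, and the choice among them, like the endpoint list itself, would need to be fixed before unblinding for the “global benefit” conclusion to carry weight.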