Why is Nutrition Research so Difficult?
The Scientific Method 101
Ideally, advances in nutrition are made using the scientific method. In case you can’t readily recall your last biology class, here it is in its simplest form:
- The first step is to make an observation, e.g., more people get colon cancer in the U.S. than in Japan.
- Next, an explanation is proposed, called a hypothesis, e.g., the lower incidence of colon cancer in Japan compared to the U.S. is due to differences in diet.
- Experiments are then designed to test the hypothesis, e.g., compare the diets and incidence of colon cancer in a Japanese American population with those in a U.S. Caucasian population.
If the results from repeated experiments support the original hypothesis, a theory can be developed; if not, the hypothesis is rejected.
Simple, right? Not in nutrition. Why is nutrition information so often confusing or contradictory? Here are some reasons. This post presents only a few basic problems in nutrition research; it is not meant to be a comprehensive guide.
1. You cannot keep people in a bubble.
Evidence-based nutrition should ideally rest on randomized, placebo-controlled clinical trials that use large population samples and extend the intervention long enough to measure health outcomes. Typically, one group is given a certain diet to follow, while another group is given no particular diet and may eat whatever it chooses. A randomized controlled trial (RCT) can provide sound evidence of cause and effect if its many variables are controlled. But these studies are simply not practical for long-term nutrition interventions: they are expensive to run over long periods, and compliance is always difficult because people are notorious for cheating on their assigned diets.
2. Most nutrition information comes from observational studies.
Many nutritional studies are observational studies that attempt to assess how diet affects health by looking for correlations or associations between what people report they eat and how many of them develop a particular disease. When many observational studies reach the same conclusions, there may be enough evidence to suggest dietary recommendations. But these studies show only correlation or association, not cause and effect.
There is another problem with observational studies: they typically depend on surveys in which people report what they ate the day (or week) before. This type of self-reported data is known to be extremely unreliable. Researchers have long known that people misreport or forget what they eat, both intentionally and unintentionally.
3. People and foods are different.
People obviously differ physiologically, psychologically, and genetically. This has been shown in studies that measured people’s blood sugar responses to the same foods and found widely varying results. Foods also differ in quality, content, preparation, and other characteristics, e.g., how they were grown or processed.
4. Conflicts of interest and bias add to the confusion.
Food companies often conduct studies designed to promote the claimed health benefits of their products, and they use studies that support their claims as major marketing tools.
5. Replication, a key part of the scientific method, is often neglected.
There is always a chance that the original results occurred due to error. To reduce this possibility, it is common, when possible, to repeat the original experiment multiple times. This is especially prudent when the original results are significant or surprising.
What Can You Do?
So, should you just give up on listening to anything about the food you eat? No. We should be reminded that “all scientific knowledge is subject to change. We learn from it.” We can learn from negative results as well as positive ones.
By looking at the big picture of many studies (meta-analyses) on a nutrition question, rather than just a few, you can begin to see patterns that point in the same direction. It is extremely important not to dwell on single-nutrient studies but to focus on studies that examine the total diet.
Pay attention to the source of funding and the potential biases of the authors. Ignore bold statements or scaremongering headlines that are not supported by the current research. Is the source selling something? Is it based on someone’s personal story (i.e., anecdotal)?
Was the information interpreted accurately? Compare the news headlines with the peer-reviewed conclusions of the study itself. Did the study discuss its limitations? Were the results or conclusions twisted to support the bias of the author(s)? Has the importance of the study been exaggerated? Does it make sense?