FiveThirtyEight recently had a terrific article about how difficult it is to apply statistical methods to nutrition studies. In a nutshell, the issues are confounding in observational studies, observer bias, and p-hacking. I recommend reading the whole thing: http://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/
Here’s a nice article about the inherent difficulties faced by nutritional scientists and the perils of observational studies (as opposed to controlled experiments): http://www.slate.com/articles/life/food/2015/04/nutritional_clinical_trials_vs_observational_studies_for_dietary_recommendations.single.html
Most of our devout beliefs about nutrition have not been subjected to a robust, experimental, controlled clinical trial, the type of study that shows cause and effect, which may be why Americans are pummeled with contradictory and confounding nutritional advice.
Many nutritional studies are observational studies, including massive ones like the Nurses’ Health Study. Researchers like Willett try to suss out how changes in diet affect health by looking at associations between what people report they eat and how long they live. When many observational studies reach the same conclusions, Willett says, there is enough evidence to support dietary recommendations. Even though they only show correlation, not cause and effect, observational studies direct what we eat.
Apart from their inability to determine cause and effect, there’s another problem with observational studies: The data they’re based on—surveys where people report what they ate the day (or week) before—are notoriously unreliable. Researchers have long known that people (even nurses) misreport, intentionally and unintentionally, what they eat. Scientists politely call this “recall bias.”
The coupling of observational studies and self-reported data leads some observers to the conclusion that we know neither how Americans do eat nor how they should eat. A recent PLOS One article even suggests that several national studies use data that is so wildly off base that the self-reported caloric intake is “incompatible with survival.” If people had eaten as little as they reported, in other words, they would be starving.
Peter Attia, a medical researcher and doctor, started questioning the basis of dietary guidelines when he saw that following them didn’t work for his patients. They didn’t lose weight, even when they virtuously stuck with their diets. When he took a look at the research supporting the advice he was giving to his patients, he saw shoddy science. Attia estimates that 16,000 nutritional studies are published each year, but the majority of them are deeply flawed: either poorly controlled clinical trials, observational studies, or animal studies. “Those studies wouldn’t pass muster in another field,” he told me.
For years, I’ve used the following clip in my Applied Statistics class when introducing randomized controlled experiments and observational studies. It has been a big hit every single time.
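The core contrast, that a lurking confounder can manufacture a diet–health association which randomization erases, can also be sketched in a quick simulation. This is a toy model of my own invention (the variable names and numbers are made up, not drawn from any study mentioned above): a hidden "health consciousness" trait drives both who adopts a diet and the health outcome, while the diet itself does nothing.

```python
import random

random.seed(1)

def simulate(randomized, n=100_000):
    """Return the diet-group minus no-diet-group mean outcome.

    Hypothetical setup: 'conscious' is a hidden confounder that raises
    the outcome by 2 units; the diet has NO true effect on the outcome.
    """
    total = {0: 0.0, 1: 0.0}
    count = {0: 0, 1: 0}
    for _ in range(n):
        conscious = random.random() < 0.5          # hidden confounder
        if randomized:
            diet = 1 if random.random() < 0.5 else 0   # coin-flip assignment
        else:
            # observational: health-conscious people choose the diet more often
            diet = 1 if random.random() < (0.8 if conscious else 0.2) else 0
        # outcome depends only on the confounder, not the diet
        outcome = 2.0 * conscious + random.gauss(0, 1)
        total[diet] += outcome
        count[diet] += 1
    return total[1] / count[1] - total[0] / count[0]

print(simulate(randomized=False))  # large spurious "diet effect" (about 1.2)
print(simulate(randomized=True))   # near zero: randomization breaks the link
```

The observational comparison reports a sizable benefit of the diet even though none exists, because dieters are disproportionately the health-conscious; random assignment balances that trait across groups and the apparent effect disappears.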