When it comes to research in the fields of nutrition and strength and conditioning, most studies report whether or not their findings were "significant." In essence, this answers only one question: did the intervention work or not?

 

But these fields lag behind other sciences, where studies report how much or how well an intervention worked. Attending to this distinction gives us a great deal more information when we are making health decisions.

 

Cargo Cult Science

Richard Feynman famously described a cargo cult of islanders in the Pacific during World War II. This group of islanders saw the U.S. military plow down trees, build a runway, and erect an air traffic control center. Following all of this, planes started “falling” from the sky bringing gifts.

 


 

The islanders thought they could follow the same process to get the same results. They built control towers out of bamboo, lit fires along the runway, and even created bamboo headsets for the ground crew. But no planes came. Feynman described this story as a metaphor for many "so-called sciences." They try to do the same things as "real sciences," but no matter what they do, the real science does not land.

 


This story came to mind as I was reading through a popular strength and conditioning journal. Much of what is published looks like science and feels like science, but it is missing something. For example, I originally intended to review a research article showing that velocity in a three-minute max run is associated with other measures of anaerobic capacity. Anyone reading that article could have guessed those outcomes are associated. What we need to know is how strong the association is, and how much it matters to us.

 

Does It Work?

A big problem in sports and conditioning research is that we rely heavily on null hypothesis testing. Null hypothesis testing is a set of statistical techniques that ask how likely the observed data would be if there were actually no effect (i.e., if the null hypothesis were true). The goal is to knock down that null hypothesis and say, "We think we found something." The result is a yes-or-no statement: it tells us whether an effect probably exists, but not how big it is.

 

A better step would be to tell us how much better an exercise or a nutritional change is for us. Measures called effect size estimates answer the question "how much?" rather than merely "does it work?"
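To make this concrete, one of the most common effect size estimates is Cohen's d: the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch in Python; the strength-gain numbers are invented purely for illustration, not taken from any study.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) /
                 (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical strength gains (kg) for a training group vs. a control group
training = [12, 15, 11, 14, 13, 16]
control = [10, 12, 9, 11, 13, 10]
d = cohens_d(training, control)
# By Cohen's rough benchmarks: 0.2 is small, 0.5 medium, 0.8 large
```

A p-value from these same two groups would only say "significant or not"; d puts a number on how far apart the groups actually are, in standard deviation units.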

 


 

Examples

Let’s take the example of aspirin to prevent heart attacks later in life. Physicians often recommend aspirin because previous research has shown it to be effective in preventing heart attacks (answering the "does it work" question). In 2010, my colleague and I calculated the results a different way. We found that 176 patients would have to take aspirin daily for one patient to avoid a major cardiovascular event, and 208 patients would have to take it for one person to avoid a myocardial infarction. So aspirin may not be as effective as we believe, and these numbers are similar to those for expensive prescription medications. When deciding on healthy changes in our lives, it helps to know how much an intervention is actually likely to affect you.
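The figure quoted above is a number needed to treat (NNT), which is the reciprocal of the absolute risk reduction (the difference in event rates between the untreated and treated groups). A quick sketch of the arithmetic; the event rates below are hypothetical values chosen only to show the kind of calculation involved, not figures from the study.

```python
def number_needed_to_treat(control_event_rate, treatment_event_rate):
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no risk reduction")
    return 1 / arr

# Hypothetical rates: 2.00% of untreated vs. 1.43% of treated patients
# experience an event, an ARR of about 0.57 percentage points
nnt = number_needed_to_treat(0.0200, 0.0143)
# NNT is conventionally rounded up to the next whole patient
```

Notice how small absolute risk reductions inflate the NNT: a treatment can be "significant" in the null hypothesis sense while still requiring hundreds of people to be treated for one person to benefit.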

 


Another example is the supplement CLA. In a 2005 study, researchers found a 4.5% reduction in body fat over a 24-month period in the group taking CLA. This number sounds as if we are getting closer to the "how much," but it is a bit more complex. The 4.5% reduction amounts to a 1.8kg loss over the two years, while the placebo group lost 1.7kg over the same period. If we calculate an effect size estimate, the effect of taking CLA over the placebo is minimal. So, is it worth spending money on a supplement with a minimal effect over the placebo?
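A rough illustration of why that 0.1kg difference is tiny in effect size terms: standardizing it requires dividing by the spread of individual fat loss, and the standard deviation used below is an assumption for illustration, not a figure from the paper.

```python
# Mean fat loss reported over two years (kg)
cla_loss = 1.8
placebo_loss = 1.7

# Assumed between-subject standard deviation of fat loss (hypothetical;
# the paper's actual spread would be needed for a real calculation)
assumed_sd = 3.0

# Standardized mean difference (Cohen's d style)
effect_size = (cla_loss - placebo_loss) / assumed_sd
# Far below even the conventional "small" benchmark of 0.2
```

With any plausible spread in individual results, a 0.1kg group difference yields a standardized effect near zero, which is exactly the information a bare "significant" label hides.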

 

Part of the problem lies in the way researchers are taught statistics. Research methods textbooks are littered with information on the null hypothesis and usually say little about effect size estimates. Today's textbooks are still similar to those used in the 1960s (with the exception of some excellent new texts, such as Geoff Cumming's). But many journals are now demanding that researchers report measures of effect size and move away from null hypothesis testing. As consumers of research, we would benefit greatly from these changes.

 

Maybe we're not so different from Feynman's islanders.

 

Summary

When we think of physics, we think of laws. Physics gives us a statement of "how much." Newton's law of cooling states how quickly an object will cool, given its initial temperature and the room temperature. Nutrition research, as well as sports and conditioning research, would benefit greatly from measures of effect size that tell us how much or how well an intervention works.
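Newton's law of cooling is a good model of a "how much" answer: T(t) = T_room + (T_0 − T_room)·e^(−kt). A small sketch, with the cooling constant k and the temperatures chosen purely as an example.

```python
import math

def temperature(t_minutes, t_initial, t_room, k):
    """Newton's law of cooling: exponential decay toward room temperature."""
    return t_room + (t_initial - t_room) * math.exp(-k * t_minutes)

# A cup of coffee at 90 degrees C in a 20 degree room,
# with a hypothetical cooling constant k of 0.05 per minute
temp_after_10 = temperature(10, 90.0, 20.0, 0.05)
```

The law does not just say "the coffee cools" (the yes-or-no answer); it predicts the temperature at any moment. Effect size estimates move nutrition and conditioning research in that same direction.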

 

If the fitness industry wants to tout how its programs are scientifically based, then it would be helpful for the cargo cult science that makes up much of fitness literature to move forward to more modern techniques of science.

 


 

References:

1. Feynman, Richard P., Ralph Leighton, and Albert R. Hibbs. Surely You’re Joking, Mr. Feynman!. Edited by Edward Hutchings. Reprint edition. New York: W. W. Norton & Company, 1997.

2. Gaullier, Jean-Michel, Johan Halse, Kjetil Høye, Knut Kristiansen, Hans Fagertun, Hogne Vik, and Ola Gudmundsen. “Supplementation with Conjugated Linoleic Acid for 24 Months Is Well Tolerated by and Reduces Body Fat Mass in Healthy, Overweight Humans.” The Journal of Nutrition 135, no. 4 (April 1, 2005): 778–84.

 

