by Marlene Merritt, LAc, DOM, ACN
It’s pretty frustrating, isn’t it? Studies contradict themselves all the time, making it hard for us to know what is correct and what isn’t. Margarine is good, then it’s bad, but it must be OK because the American Heart Association still recommends it, right? (Seriously, they do.) Eggs are good, then they’re bad, now they’re back on the upswing again. Vitamins are supposed to protect against cancer, but now they seem to cause cancer[i],[ii],[iii] — how are we to know what to do for ourselves?
Just to give you some context, it has been estimated that most of the published medical information that doctors rely upon for diagnoses and prescriptions is wrong (PLoS Medicine, 2005).[iv] That shocking finding should give you serious pause the next time you see a headline in the paper.
If you want to learn how to discern what is true and what is not, the first thing to do is stop getting your medical information from the television and magazines. Reading headlines or a blurb in the paper, or hearing a 60-second report on television, often distilled through a reporter who has little medical knowledge, ensures you will never get the complete or accurate story. Secondly, it helps to know something about the topic — if you do not know anything about, for example, soy, it would help to read more than just websites exclaiming how healthy it is. Reading conflicting points of view from reputable resources will, if your mind is open enough, allow you to gather information from all sides. Thirdly, you want to notice where you’ve decided you “know” something already and are therefore not open to hearing the whole story. I’ve spent the last three years writing articles for Acupuncture Today, trying to break open what we all think to be so true that we won’t even consider another point of view (mostly because I thought all those things were true, too, before I started researching). And lastly, practice some critical thinking. This means not automatically believing everything you read and hear.
So what are some of the factors that influence studies? Bias is a big one, and bias comes from a variety of directions:
- Researchers wanting a certain result will (surprise!) often find that result. At every point in the research there are opportunities to distort results, make a stronger claim, or select what gets concluded or included.
- Pressure to win funding, secure tenured positions, or keep grants creates opportunities to bias research. The pressure to discover something new and ground-breaking, which is far more likely to be published than a study finding a theory to be false, can also cause results to be skewed.
- Deliberate withholding of information by vested interests. The Cochrane reviewers’ difficulty in obtaining all the studies on the effectiveness of Tamiflu resulted in the paper “The Imperative to Share Clinical Study Reports: Recommendations from the Tamiflu Experience.”[v] The constant exhortation to take Tamiflu to prevent the flu or flu symptoms is simply not borne out by the evidence, but the manufacturer, of course, would prefer you did not know that.
- Financial conflicts of interest, whether from pharmaceutical companies, sponsors, or simply the desire to maintain a grant, can, of course, distort results. The examples of this are, unfortunately, too numerous to mention.
Then there’s the actual research. These are the elements to be looking for:
Observational studies versus randomized, double-blind, placebo-controlled studies. Observational studies are just that: several elements were observed, and someone wrote a paper saying they were correlated (NOT causal, but correlated — those are two very different things and are often confused). If observation were all that was needed, we’d be quite certain that the earth is flat and that the sun revolves around us. BE VERY CAREFUL WITH THIS. An observation should lead to a hypothesis, which should lead to testing, which nearly always leads to a failed test, since there are many wrong hypotheses for every right one, so the odds are against any one in particular being correct. And if an observational study is reported as “fact,” most people assume it’s correct, which gives us debacles such as HRT: hormone replacement therapy prescribed for preventive health that instead increased the risk and incidence of heart disease, stroke, blood clots, breast cancer and dementia, injuring or killing tens of thousands of women.
That’s the problem with epidemiology[vi] — and epidemiologists know it too, hence the article “Epidemiology — Is It Time to Call It a Day?” (Int’l Journal of Epidemiology, 2001) and people referring to the field as a “pseudoscience.”[vii] There is simply no way to observe a large group of people, balance all the confounders, and then claim your interpretation is solid (smokers vs. non-smokers, vegetarians vs. meat eaters, different ages, genders, income disparities, educational differences, exercise, overall initial health — it’s like saying you can balance a vegetarian, non-smoking, exercising female software engineer from Seattle against a construction worker in Alabama who eats at truck stops. Good luck with that). Researchers give questionnaires to people and hope they will tell the truth about what they do and eat (and the likelihood of that is…?), and often those questionnaires are years apart. Yet we read about observational studies like this all the time and, being unable to distinguish good research from bad, assume the conclusions are solid. The book “The China Study” by T. Colin Campbell (not to be confused with the actual China study) is a good example — I’ll get to it in a moment.
Statistics can be skewed many, many ways, but one good example is in risk calculations. Let’s say 100 men take a statin, and 100 men take a placebo. At the end of the study, 2 men on the statin have had a heart attack, and 4 men on the placebo have. Absolute risk calculation says that statins reduce the risk of heart attack by 2 percentage points (from 4% to 2%). But relative risk calculation sees that half as many men on the statin had a heart attack as on the placebo, and so reports that statins reduce your risk of a heart attack by 50%. Which way do you think the study will be reported? (Let’s just say that absolute risk is an unpopular number.) And while absolute risks are buried in the actual studies, relative risk numbers are more often in the abstracts, and who wants to read the whole study when the abstract (supposedly correctly) summarizes everything?
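The arithmetic above is easy to verify yourself. Here is a minimal sketch in Python using the hypothetical 100-man statin example from the paragraph (the function name and figures are illustrative, not from any real trial):

```python
def risk_reduction(events_treated, n_treated, events_control, n_control):
    """Return (absolute, relative) risk reduction as fractions."""
    risk_treated = events_treated / n_treated   # risk in the treatment group
    risk_control = events_control / n_control   # risk in the control group
    arr = risk_control - risk_treated           # absolute risk reduction
    rrr = arr / risk_control                    # relative risk reduction
    return arr, rrr

# Hypothetical example from the text: 2/100 heart attacks on the statin,
# 4/100 on the placebo
arr, rrr = risk_reduction(2, 100, 4, 100)
print(f"Absolute risk reduction: {arr:.0%}")    # 2%
print(f"Relative risk reduction: {rrr:.0%}")    # 50%
```

Both numbers come from the same data; the “50%” headline simply divides the 2-point difference by the 4% baseline risk, which is why small absolute benefits can be advertised as large relative ones.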
Ultimately, even when a study claiming some impressive-sounding result is disproven later on, many researchers have already invested their careers in the area and continue to publish papers on the topic, “infecting” other researchers who read those studies in journals. Others keep citing the original research as correct, often for years or decades after it was refuted. If you can’t keep up with the studies, what makes you think M.D.s can?
The process of “peer review” is, of course, dependent on the foibles of human beings, and therein lies the crux. The journal Science performed a sting operation[viii] in which it submitted a paper from a fictitious research institute, with “numerous red flags” in the data obvious enough that a peer reviewer spending a few minutes on the paper would notice them. Of the 304 journals to which the paper was submitted, nearly 160 had accepted it and 20 were still considering it at time of printing. In fact, some journals returned the paper with requests for changes to layout, formatting and language, and then STILL accepted it.
Then we have the concrete problems with studies themselves. There are studies that came to a “solid” conclusion based on testing 17 individuals. Or ones that fed vegetarian animals (rabbits) something wildly outside their normal diet (cholesterol dissolved in sunflower oil) and then concluded that eating cholesterol causes heart disease.[ix] Or the demonization of solid fats, with the research done on hydrogenated fats rather than healthy, non-adulterated fats, and the predictable result of health problems in the hydrogenated-fat group. Or conclusions like “meat causes a potassium deficiency,” when it turns out the researchers fed the subjects boiled turkey meat, with the potassium lost to the cooking water, which was thrown out…[x] Clearly, I could go on and on.
Bias in the media is particularly frustrating, as this is the source where most people get their health information. It starts with a research study that says, “A is correlated with B, given C, assuming D, and under E conditions,” which turns into “Scientists find potential link between A and B” and “A Causes B!” and in less reputable news outlets becomes “What You Don’t Know About A Can Kill You!” The reporter is often reading something second- or third-hand, and possibly adding to the bias. I personally read a blurb in the New York Times[xi] in which the author said that the Cochrane Collaboration (a group of independent scientists who scrutinize the legitimacy and accuracy of studies) concluded, “Studies do not lend much support to milk thistle’s reputation as a liver protector.” Really? Because that’s not what it said when I looked up the review on Cochrane.org (and really, who does that?). In fact, the authors said this: “Our results… highlight the lack of high-quality evidence to support this intervention. Adequately conducted and reported randomized clinical trials on milk thistle versus placebo are needed.”[xii]
The book “The China Study” is a good example of where a lot of these elements come into play. “The China Study” (the book, not the actual China study, which said nothing of the sort) purports that animal protein causes cancer. The underlying studies fed rats casein (a dairy protein), which is known to cause health problems. But you can’t even say that dairy proteins cause cancer, because whey protein (another dairy protein) actually protects against cancer.[xiii],[xiv] Instead, Campbell (a known vegan, and this is important because it can contribute to bias) leaps directly to the statement that animal protein causes cancer. Really? So then how does one explain cultures like the traditional Inuit, who, living on permafrost, couldn’t even grow vegetables, ate basically only meat and fats, and had a cancer rate of .01%? (Of course, they weren’t the only ones.) What you also didn’t read in the book was what happened to the rats: the low-protein diets actually prevented the rats’ livers from detoxifying properly, and they died premature deaths of liver failure. This also occurred in the original research that Campbell replicated, but it is conveniently left out of the book.[xv] Oh, and you probably didn’t read this: “An examination of the original China Study data shows virtually no statistically significant correlation between any type of cancer and animal protein intake.”[xvi] (That’s from the actual China study.) I could go on quite a bit about this (I have a whole lecture on it), so if you’d like to read more about this topic, please email me.
The conclusion is this: Don’t be a lemming, following what everyone else says. Or a parrot, repeating poor information. Start reading more, and reading between the lines. Immediately become cautious when you read words like “possibly” or “may correlate” or “this observational study says…,” and remember that an enormous number of the studies you read are wrong — and if you’re just reading the headlines, that’s even worse. Question everything and remember your common sense (how in the WORLD did we fall for margarine?). Take everything with a grain of salt (a pound might be more helpful), and practice your critical thinking, if only because it will make you very entertaining at dinner parties.
[i] Omenn GS, Goodman GE, Thornquist MD, Balmes J, Cullen MR, Glass A, Keogh JP, Meyskens FL Jr, Valanis B, Williams JH Jr, Barnhart S, Cherniack MG, Brodkin CA, Hammar S: Risk factors for lung cancer and for intervention effects in CARET, the Beta-Carotene and Retinol Efficacy Trial.
[ii] The effect of vitamin E and beta carotene on the incidence of lung cancer and other cancers in male smokers. The Alpha-Tocopherol, Beta Carotene Cancer Prevention Study Group. N Engl J Med. 1994 Apr 14;330(15):1029-35.
[iii] Bjelakovic G, Nikolova D, Simonetti RG, Gluud C. Antioxidant supplements for preventing gastrointestinal cancers. Cochrane Database Syst Rev 2004;(4):CD004183
[iv] Ioannidis, J.P.A. “Why Most Published Research Findings Are False.” PLoS Med. 2005 August; 2(8): e124
[v] Doshi, Peter, Tom Jefferson, and Chris Del Mar. “The imperative to share clinical study reports: recommendations from the Tamiflu experience.” PLoS Medicine 9.4 (2012): e1001201.
[vi] von Elm, E., Egger, M. “The Scandal of Poor Epidemiological Research.” BMJ. 2004 October 16; 329(7471): 868–869
[vii] Smith, G. and Ebrahim, S. “Epidemiology — Is It Time to Call It a Day?” Int. J. Epidemiol. (2001) 30 (1): 1-11
[viii] “Who’s Afraid of Peer Review?” Science 4 October 2013: Vol. 342 no. 6154 pp. 60-65
[ix] Anitschkow N, Experimental Arteriosclerosis in Animals. In: Cowdry EV, Arteriosclerosis: A Survey of the Problem. 1933; New York: Macmillan. pp. 271-322.
[x] Dehaven, J. et al, Nitrogen and sodium balance and sympathetic nervous system activity in obese subjects treated with a low-calorie protein or mixed diet. N Engl J Med, 1980. 302(9): p. 477-82
[xi] O’Connor, Anahad. “REALLY?; Milk thistle is good for the liver.” New York Times, Jan. 12, 2010.
[xii] Rambaldi A, Jacobs BP, Gluud C. Milk thistle for alcoholic and/or hepatitis B or C virus liver diseases. Cochrane Database of Systematic Reviews 2007, Issue 4.
[xiii] Bounous G., et al. Whey proteins in cancer prevention. Cancer Lett. 1991 May 1;57(2):91-4.
[xiv] Hakkak R., et al. Diets containing whey proteins or soy protein isolate protect against 7,12-dimethylbenz(a)anthracene-induced mammary tumors in female rats. Cancer Epidemiol Biomarkers Prev. 2000 Jan;9(1):113-7
[xv] Madhavan, T.V. and C. Gopalan. “The effect of dietary protein on carcinogenesis of aflatoxin.” Arch Pathol. 1968 Feb;85(2):133-7.