Thursday, October 12, 2006

The Thing About Research: Engendering Some Healthy Paranoia

People love to cite research. Take these fictitious examples: “This study showed that 80% of participants lost 10 or more pounds after taking these pills for one month.” “Another study showed that eating chocolate daily can lead to a long and painful death.” “Some research suggests that if you ingest fewer than 500 calories per day, you’ll live past 100.”

Let’s take a look at a real example: an AOL news item recently revealed that approximately 42% of French people (older than 15) have a “weight problem.” To start, the study was conducted by ObEpi-Roche, described as a “drugs group” that “makes weight loss products,” such as Xenical. Hmm. . . think they might have a vested interest in showing exactly how fat the French are? How about the survey’s co-sponsors, Sanofi-Aventis and Abbott Laboratories (the manufacturers of the diet drugs Acomplia and Meridia, respectively)? The AOL article states that “Campaigns were launched in France last year warning of the health dangers linked to obesity. . . .” It’s always important to understand who is funding (directly or indirectly) the research on obesity. In The Diet Myth, Paul Campos reports that many studies on obesity are conducted by physicians and weight-loss clinics intimately tied to the diet industry. By definition, this obfuscates the possibility of unbiased (read: ethical) research.

On August 27, 2006, NBC exposed a similar problem in cancer research. They revealed that cancer studies are often funded by pharmaceutical companies, and that the drug companies play a large role in the research, often choosing which results will be reported and even writing the papers “authored” by scientists. That is, the researcher conducting the study doesn’t even write up the results (yet his or her name appears as the author). I’m concerned that researchers would allow ghost-writers to publish their results. As part of the American Psychological Association’s ethics code, for instance, I’m accountable to standards of practice that obligate me, among other things, not to publish research that isn’t mine and not to have sex with my patients! These are pretty big things.

Furthermore, many studies run multiple analyses as part of the research—in this way, researchers can get creative and choose to publish the results that support their hypotheses. . . and their products. As any amateur statistician can tell you, statistics are more an art than a science, and if you look hard enough (and run enough analyses), you’re bound to find something you hoped to see.
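That “run enough analyses and you’re bound to find something” point can be made concrete with a tiny simulation (a sketch for illustration, not any particular study’s method). Under the null hypothesis, p-values are uniformly distributed, so a study that quietly runs 20 separate tests has roughly a 64% chance of turning up at least one “significant” finding even when there is no real effect at all:

```python
import random

random.seed(42)

ALPHA = 0.05        # conventional significance threshold
N_TESTS = 20        # analyses quietly run per "study"
N_STUDIES = 10_000  # simulated studies in which NOTHING is really going on

# Under the null hypothesis, a p-value is uniform on [0, 1], so each
# test has a 5% chance of looking "significant" by luck alone.
studies_with_a_finding = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(N_TESTS)]
    if min(p_values) < ALPHA:  # "look hard enough..."
        studies_with_a_finding += 1

rate = studies_with_a_finding / N_STUDIES
print(f"Studies with at least one 'significant' result: {rate:.0%}")
# Theory predicts 1 - 0.95**20, i.e., about 64%.
```

Publishing only the one analysis that came up significant, and shelving the other nineteen, is exactly how a product can appear “clinically proven.”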

Next point: the study used BMI to classify people as overweight or obese. Recent research suggests that BMI is not an accurate or reliable indicator of weight-related health concerns. Should we still be using it as a measure? More generally, what measures do researchers employ in a study, and do those instruments show adequate psychometric properties (i.e., are they valid and reliable)?
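For context on that point: BMI is nothing more than weight in kilograms divided by height in meters squared, bucketed by cutoffs (the standard WHO adult cutoffs below). A quick sketch shows the core criticism, since the formula knows nothing about body composition:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(b: float) -> str:
    """Standard WHO adult cutoffs."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

# A muscular 95 kg, 1.80 m athlete is labeled "overweight" (BMI ~29.3),
# even though the number can't distinguish muscle from fat.
print(who_category(bmi(95, 1.80)))
```

A single ratio applied to an entire population will misclassify plenty of individuals in both directions, which is worth remembering whenever a headline statistic hinges on it.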

The informed consumer of research should consider other factors as well when evaluating a study’s claims:

1) How many participants were in the study? Generally, the more the better. Was it a diverse sample? Did the sample represent you?

2) Were all data used, and if not, how can we explain why certain data were tossed?

3) What types of statistical techniques were used? I won’t bore you here, but techniques vary in their statistical power.

4) In a true experiment, were the participants and the researcher aware of the experimental hypotheses? If so, that may influence (and artificially inflate) the results.

5) How about that sneaky fellow, the placebo effect?

6) What other factors may contribute to what seems like a causal relationship? For example, let’s say a diet pill advertises itself as proven effective for weight loss in 90% of all patients. Let’s also say that taking said pill makes you really tired, and you end up sleeping significantly more each night. Can we really say that the pill caused weight loss? Or did it promote sleep, which on its own would have reduced food intake? Ever notice that (barring the Ambien-binge reports) you kind of eat less when you’re asleep? Insufficient sleep is also linked to disruptions in hunger hormones, such as leptin and ghrelin. Or, let’s look at happiness and exercise. If we find that people who exercise daily are happier, can we say that exercise leads to happiness? Not really. Maybe happy people are simply more inclined to hit the gym.

7) Where was the research published? Peer-reviewed journals are best. Even research that’s really, really bad can be published in a sub-standard publication for a fee.
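The confounding problem in point 6 can also be simulated. Everything here, the pill, the hours, the kilograms, is invented purely for illustration: the pill adds sleep, and only sleep causes weight loss, yet a naive comparison makes the pill look effective:

```python
import random

random.seed(1)

# Toy model of the diet-pill example: the pill adds ~2 hours of sleep,
# and only extra sleep (not the pill itself) drives weight loss.
N = 10_000
rows = []
for _ in range(N):
    took_pill = random.random() < 0.5
    sleep_hours = 7 + (2 if took_pill else 0) + random.gauss(0, 1)
    weight_loss_kg = 0.5 * (sleep_hours - 7) + random.gauss(0, 1)
    rows.append((took_pill, sleep_hours, weight_loss_kg))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: pill-takers really do lose more weight...
naive_diff = (mean([w for p, s, w in rows if p])
              - mean([w for p, s, w in rows if not p]))

# ...but comparing only people who slept similar amounts (roughly
# "controlling for" sleep), the pill's apparent effect mostly vanishes.
band = [(p, w) for p, s, w in rows if 7.5 < s < 8.5]
band_diff = (mean([w for p, w in band if p])
             - mean([w for p, w in band if not p]))

print(f"naive pill effect:       {naive_diff:+.2f} kg")
print(f"within-sleep-band effect: {band_diff:+.2f} kg")
```

The naive difference comes out near one kilogram while the sleep-matched difference sits near zero: the “effect” belonged to the confounder all along.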

All kinds of factors need to be considered when evaluating weight-loss research. Unfortunately, the scientific value of a study is often obscured by the emotion, money, and media coverage involved. Reading, and consequently citing, a study at face value is often not enough.

5 comments:

PTC said...

It all boils down to $$ with these companies. No one's going to bite the hand that feeds them either.

(As I was reading this post I was thinking, Geez where the hell did this girl get so smart? Love the word "obfuscates." Never heard that one before.)

psychologista said...

My favorite take on this sort of thing was an ad by a hospital promoting exercise:

"Sore feet protect against heart disease"

drstaceyny said...

ptc--good point abt money. Thanks for the compliment. : )

elissa--deep-cleansing pedicures for everyone! ; )

Anonymous said...

and when they do bite the hand that feeds them they get fired

like here
http://www.aaas.org/spp/sfrl/per/per46.pdf
:) Some people are born to fatness. Others have to get there.

Anonymous said...

Thanks for your tips on how to analyze "scientific research." I am currently taking my first statistics class and wish I had taken stats much earlier. Not that I use statistics in my everyday life, but knowing something about them is crucial to being able to evaluate other people's stats-based claims.