Friday, May 8, 2009

Spinning the News

Earlier this year, the science news Web site Science Daily ran a story headlined "Mice Stay Lean with High-Carb Diet."

Reading the head, I assumed the story would be about a study showing that mice that followed a high-carb, low-fat diet stayed lean while their littermates that were allowed more fat became obese.

Wrong.

In fact, the story reported a study showing that mice lacking a particular gene were able to stay lean despite being fed a high-carb diet. Researchers said the gene might play a role "in the prevention of obesity related to the over-consumption of high-carbohydrate foods, such as pasta, rice, soda, and sugary snacks."

In other words, the headline said the exact opposite of what the story said. And busy people who read only headlines would come away with the impression that high-carb diets kept mice lean (and they'd probably assume the same applied to people).

Four days later, Science Daily reran the exact same story. But this time the head was accurate: "Mice With Disabled Gene That Helps Turn Carbs Into Fat Stay Lean Despite Feasting on High-Carb Diet." Apparently I wasn't the only one who noticed the bad headline. Perhaps the researchers complained.

This story illustrates the problem that faces anyone who supports a concept that isn't the dogma of the day. Many people, especially reporters in the popular press, buy into the idea that only low-fat, high-carbohydrate diets are healthy. So they interpret everything through that biased lens.

This means that when science reporters see a research study that supports something they believe in (let's say that red meat is bad for you), they'll read it, write it up, and top it with a headline that supports that thesis.

An example of this is the "red meat is bad for you" hypothesis. People do studies in which they lump red meat together with luncheon meats and hot dogs, the latter two usually packed with carbohydrate fillers, sugar, and chemicals to keep them fresh. They find that people who eat any of these three things do worse on some outcome, let's say heart disease, and then they write stories with headlines that say, "Red meat causes heart disease."

But what if the luncheon meats and hot dogs cause heart disease -- or, more accurately, are related to heart disease, since many of these studies show only a relation between two factors, not causation -- and red meat does not? When foods are lumped together in groups, there's no way to tell which of them is actually responsible for the effect the researchers found.

Anyone actually taking the trouble to read the original research paper should be able to figure this out. But how many people do that? Very few. Most will rely on the science reporters to do an unbiased job.

But they don't. They choose the outcome they believe in and trumpet that.

To be fair to the reporters, I'll add that I suspect they're under a great deal of time pressure. I worked at a newspaper for 8 years, and I know what it's like to try to write a complex news story when the clock is ticking. The science reporters in this case are trying to digest extremely complex research reports and translate them into terms the general public can understand.

The scientists, on the other hand, and especially the public relations officers at the institutions where the scientists work, are trying to put a "sensational" spin on the results to make them sound more important than they really are, hoping that this will help them get more funding to do more research.

What we as patients have to do is to try to extract the truth from all this spin. It's sometimes difficult and takes a lot of time.

The physicians who treat us are also very busy people, and they too -- even those who can still remember the statistics they studied years ago, if they studied them at all -- don't have time to pore through science magazines checking to make sure the statistics are accurate.

Sometimes they’re not.

I copy edit articles for a science journal, and I'm constantly amazed at the number of careless errors I find in the manuscripts that authors with advanced degrees have submitted for publication. One author using advanced statistics wasn't distinguishing between average and median, a very basic difference that every statistics newbie should be familiar with. Other papers give different numbers in the text and in the tables they supply to support the text. I'm sure many of these problems get past the editors and appear in print.
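
For anyone who has forgotten the distinction: the average (mean) gets pulled around by extreme values, while the median, the middle value, does not. Here's a quick sketch in Python, with made-up numbers just to illustrate:

    from statistics import mean, median

    # Made-up blood glucose readings (mg/dl); the 300 is a single outlier
    readings = [90, 95, 100, 105, 300]

    print(mean(readings))    # 138 -- the average, dragged upward by the one 300
    print(median(readings))  # 100 -- the middle value, unaffected by the outlier

Report one when you mean the other, and a single unusual subject can make a whole group look abnormal.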

So our physicians also have to rely on headlines that they see in the medical magazines they read, and these headlines too may be misleading.

Just remember that you can't trust the headlines. TV sound bites and newspaper headlines are the least reliable. TV news has to be short and interesting. Newspaper headlines have to give a message in a limited amount of space. But even science news stories and journal articles can have misleading titles.

If you see a headline that sounds interesting, read the story carefully. If it seems like something that will be important for you, see if you can get access to the original journal article. The abstracts of such articles are usually free, but you may have to pay, or wait 6 months or a year, to read the full text. Or you can see whether the journal is available at a local hospital or academic library.

You can also look around on the Internet, putting the title of the article into your favorite search engine. Perhaps someone else has read the whole thing and written a commentary on it. Or maybe some site has posted a link to the full text.

Put the names of the authors into your search engine. Sometimes scientists write very similar articles for different journals, and some slightly earlier publications may now be available without charge and will give a good indication of the methods that these researchers use in their work.

Otherwise, take any short summary with a grain of salt. It may be true. Or it may not be. Reader beware.

3 comments:

  1. My favourite study debunkers have to be Michael Eades and Jenny of Diabetes Update. Some studies are so appallingly savaged by so many people on so many blogs that one wonders why they ever got through peer review in the first place. Probably because the conclusions support current dogma even though the actual data may suggest something different.

    Also worth checking who financed the study and who bankrolls the researchers.

  2. Gretchen, I believe I have recommended this book to you before, but I do hope you will consider reading WORRIED SICK and/or THE LAST WELL PERSON (both probably at your public library). Those, along with the book OVERDOSED AMERICA, are concerned with the misuse of statistics when it comes to medical studies, particularly ones championing the use of various drugs. These books are real eye-openers. The authors are MDs, not some "activist" looking to make money from the latest scare.

  3. Anonymous, thanks for the suggestion. I have a pile of books and research papers about 6 feet high waiting to be read and I'm already aware of the misuse of statistics, so this one won't be a big priority. But I'll keep it in mind.

    According to an Amazon review, "He mentions that increase blood glucose level is an expected part of aging, and the effort to regulate blood sugar with medication has shown no effect in terms of preventing damage to the eyes or kidneys or preventing heart disease or stroke."

    This is simply not true, and it casts doubt on the reliability of other information.
