How to Select Useful Health News (Part 2)

This entry is the second post in a series designed to help those who want to filter health news and keep only the cream of the crop. In the previous post, I offered a simplified screening tool that eliminates the news items that are useless or downright harmful.

Now that we have collected health news whose content could prove useful for improving our health, how do we find the golden nuggets? Is there a diamond in the coal, and if so, how can we locate it without being a geologist?

1) What is under study: Animals or humans? I’ve lost count of the occasions when I scanned a promising title only to realize the study was performed on animals. One can’t use such a piece of news, for two important reasons. First, animals are used as models, and models are just that: models! Animals also react very differently from us in numerous ways. For example, no responsible dog owner would give chocolate to Fido. Now, try to pry it away from any aficionado of the real dark gold. Most birds can eat plants loaded with strychnine; dear human, please do not try this at home, or anywhere else for that matter. The stuff is highly toxic to us.

Second, the use of animals indicates the research is at a very early stage, where the failure rate is very high. What looks very promising at first very often turns out to be a bust. The natural substance, drug, or protocol being tested may yield good results in animals, but there is absolutely no guarantee it will do likewise in humans.

Bottom Line: Stick with research done with human subjects.

2) How many participants? The more, the merrier! When the number of participants is small, it is much easier to stumble on a “statistically significant” result by chance alone, and researchers who keep slicing their data until something reaches significance are engaging in what is known as “p-hacking.” Reaching significance is an absolute prerequisite for getting published, and in scientific research you do not survive for long if you can’t publish. Let’s just say the incentives to “p-hack” for significance are powerful… The little simulation after this section’s Bottom Line shows just how easy it is.

There is a notable exception to this rule: controlled nutrition studies. In this case, participants live for a time in a controlled environment where everything is measured and observed: sleep, diet, activity, metabolic rate, etc. For obvious reasons, it is next to impossible to enroll many volunteers, and the high cost acts as a huge deterrent. Researchers must compensate for this handicap with a strong design and high-quality, rigorous execution.

Bottom Line: Eschew news reporting on studies with a small number of participants.
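
For the curious, here is a minimal sketch of that “easy significance” problem, written in Python with entirely made-up numbers (the group sizes, the ten outcomes, and every measurement are invented for illustration, not taken from any real study). Both groups are drawn from the same population, so there is no real effect to find, yet a small study that measures enough outcomes still produces a “significant” result disturbingly often.

```python
# Toy simulation (invented numbers): two groups of 10 people drawn from the
# SAME population, so there is no real effect. Each simulated study measures
# 10 different "outcomes" (weight, sleep, mood, ...) and we count how often
# at least one of them comes out "statistically significant" (p < 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 1000      # number of simulated small studies
n_per_group = 10      # participants per group (a small study)
n_outcomes = 10       # outcomes measured per study

false_alarms = 0
for _ in range(n_studies):
    significant = False
    for _ in range(n_outcomes):
        group_a = rng.normal(0, 1, n_per_group)   # same distribution ...
        group_b = rng.normal(0, 1, n_per_group)   # ... for both groups
        _, p_value = stats.ttest_ind(group_a, group_b)
        if p_value < 0.05:
            significant = True
    if significant:
        false_alarms += 1

print(f"Studies with at least one 'significant' result: {false_alarms / n_studies:.0%}")
```

Run it a few times and the figure hovers around 40 percent, even though both groups come from exactly the same place. That is what a small sample plus a flexible analysis can do.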

3) Results: Correlation or causation? Correlation is the degree to which measurements tend to vary together. For instance, there is a direct correlation between the number of cars and the size of a city: the bigger the city, the more cars there will be in the area. An intuitive example of inverse correlation is that the probability of keeping something secret decreases as the number of people who know about it increases.

It is imperative to understand that correlation is NOT causation! A classic example is a correlation between two cities A and B: City A has 10,000 residents, ten temples of worship and ten brothels, whereas City B has 100,000 residents, 100 temples of worship and 100 brothels. It would be entirely wrong to conclude that the presence of temples of worship leads to the establishment of brothels when a much more logical explanation is the difference in population size (the sketch after this section’s Bottom Line makes the point with numbers).

Bottom Line: As tempting as it may be to jump on a seductive rationale, always be mindful that a correlation does not, by itself, explain the observations.
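
Here is a minimal Python sketch of that cities example, again with invented numbers (200 simulated cities, roughly one temple and one brothel per 1,000 residents, the two counts generated independently of each other). Population size is the only thing the counts share, yet the raw numbers look almost perfectly correlated.

```python
# Toy illustration (invented numbers, not real city data): temples and
# brothels are generated INDEPENDENTLY in each simulated city, each at
# roughly 1 per 1,000 residents. Population size is their only common factor.
import numpy as np

rng = np.random.default_rng(0)
population = rng.uniform(10_000, 500_000, size=200)   # 200 cities of varying size
temples  = rng.poisson(population / 1000)             # ~1 per 1,000 residents
brothels = rng.poisson(population / 1000)             # drawn independently

# Raw counts correlate strongly, but only because both follow population size.
print("temples vs brothels:", np.corrcoef(temples, brothels)[0, 1])      # close to 1

# Per-capita rates remove the confounder, and the apparent link vanishes.
temples_per_1k  = temples / (population / 1000)
brothels_per_1k = brothels / (population / 1000)
print("per-capita correlation:", np.corrcoef(temples_per_1k, brothels_per_1k)[0, 1])  # near 0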

This checklist is a very simplified screening tool anyone can easily use. In a third post, I will refine the checklist, answer the questions left in the comments, and provide trusted resources that explain, in plain English, the real significance of high-impact health news.
