A report on organic food from Thomson Reuters that I ran across yesterday made me think of the old expression “lies, damned lies and statistics”. The series of monthly polls asks Americans about healthcare and is reported by National Public Radio (NPR), a partner in the initiative. The survey released in June 2011 covers attitudes toward organic food.
So far, so good…but once you get beyond the executive summary and look at the data section, the validity of the report is undermined.
“Facts for Healthcare”
The poll is part of a broader initiative from Thomson Reuters called “Facts for Healthcare”. Its aspirations are worthy:
With so many numbers being quoted across the industry and in the U.S. health reform debates, it can be difficult to separate fact from fiction. At Thomson Reuters, we feel that having an objective and comprehensive understanding of the savings potential will be beneficial to all who are concerned about improving the healthcare system.
I have myself argued the position that decisions about healthcare, and about the relative roles of private and public bodies, should be evidence-based rather than driven by dogma. For instance, when people (like my dad) claim that the US currently has the best healthcare system in the world, I trot out OECD statistics showing that despite the highest expenditure per capita, the US has one of the worst records for infant mortality. I am agnostic about how we do it, but I really think that outcome measures like infant mortality should be the basis for evaluating success or failure.
Just creating more noise?
What bothers me in the Thomson Reuters report is not its objectives, nor its results. What bothers me is that, by their own admission, the results are not statistically significant. So why bother releasing the results?
At the top of the section on “survey data” a line specifies that results formatted in red are statistically significant. Out of 2.5 pages of data, one complete line and two other isolated percentages are red. So unless there has been a typo or a formatting error, that means that virtually none of the results are statistically significant. You would think that would be worth a caveat in the executive summary, the only part most people will read. The NPR reporting doesn’t mention the lack of statistical validity either.
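For readers who want to see what “statistically significant” means for a poll percentage, here is a minimal sketch in Python of the standard normal-approximation approach: a margin of error for a single proportion, and a two-proportion z-test for the difference between two subgroups. The sample sizes and percentages below are hypothetical, since the report doesn’t give them here; this is an illustration of the technique, not a re-analysis of the Thomson Reuters data.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation).

    p: observed proportion (e.g. 0.58 for 58%), n: sample size.
    """
    return z * math.sqrt(p * (1 - p) / n)

def two_proportion_p_value(p1, n1, p2, n2):
    """Two-sided p-value for the difference between two sample proportions,
    using a pooled standard error and the normal approximation."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical example: 58% of 1,500 respondents vs. 54% of 1,500.
print(margin_of_error(0.58, 1500))            # roughly +/- 2.5 points
print(two_proportion_p_value(0.58, 1500, 0.54, 1500))
```

The convention in reports like this one is to flag a difference as significant when the p-value falls below 0.05; with small subgroup samples, even differences of several percentage points routinely fail that test, which is presumably why so little of the survey data is printed in red.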
This raises the critical question of the misuse of statistics to grab headlines. Research has already shown that few people are numerate enough to put numbers and statistics in context and to analyze the results. In my opinion, this sort of behaviour on the part of Thomson Reuters — by which I mean releasing a report that is not statistically valid and not highlighting that fact — only contributes to the problem, is antithetical to the stated objectives of the “Facts for Healthcare” initiative and, as such, is unethical.
What do you think?