Last month I wrote a column describing an academic paper on which I was an author. We took the top ten UK newspapers for one week, found every dietary health claim, and graded the evidence using standard grading tools. We found 111 claims in 37 articles, and overall about 70% were supported only by the two weakest forms of scientific evidence, or by none at all.
On Monday, James Randerson, the environment and science news editor of the Guardian, posted a critique of it ("Ben Goldacre's study of dietary news should be taken with a pinch of salt").
The majority of his 2,400-word piece is spent criticising us for a position we do not hold (and that we told him we do not hold before he posted his piece).
His is a very long piece, so I have broken his objections down into headings. As a summary, this is what he said:
• We shouldn't apply evidence grading systems, which were designed only to assess health advice, to all science and environment stories (but we didn't, and we think that would be a silly idea!).
• Claims with weak evidence were often presented with caveats (that's an interesting idea for an extra study, though his examples of caveats don't seem like caveats)
• A major news event in the week of the study might affect the proportion of weak-evidence claims (we agree, and we discuss this along with other weaknesses in the academic paper: it's another possible extra study, but it's worth noting the maths on how big the effect would have to be to change the results).
Further to this, James attempts to read patterns into the figures on individual newspapers in our study, although when the numbers are split down into such small subgroups, the best explanation for variation is probably random variation.
All research is done with limited resources, and therefore has methodological limitations, which are freely and openly discussed. I believe, as ever, that discussing the strengths and weaknesses of a specific study design is the absolute best way to understand science: so I'm very happy to go through each of James's arguments in more detail, and I also hope, for that same reason, that this post is interesting on its own merits.
Lastly, I should be clear, although in my column I use problems in science as a gimmick, if you like, to explain how science works, I do also believe that the public being given misleading health advice by the media is a very serious issue, and one that deserves serious attention and investigation. In terms of the debate on this specific issue, my concern is that the strength of James's criticisms has been overstated, and may be used by people to muddy the waters, to pretend that there is no useful data on the scale of this problem, and to belittle what is a very serious public health issue.
Should we use health advice grading systems for all science and environment stories?
Of course not.
Our study was the first in the UK to examine the quality of evidence for a systematic sample of every nutritional assertion in a week's worth of national newspapers. We wanted a simple and well-delineated issue, so we chose to take every piece of dietary health advice in a one-week period (111 claims in 37 articles). The advantage of this is that analysing the strength of evidence is more straightforward: it's easier to see what should be included or excluded, and you can use simple evidence grading tools such as the WCRF and SIGN systems, designed specifically to grade the quality of evidence for a piece of advice on a health intervention.
James says "the grading systems for 'reliability' of evidence that the authors employ are not sophisticated enough to be much use" for journalists deciding whether every science and environment story they come across is newsworthy. Of course they're not: they're specifically designed to examine the quality of evidence for advice on health interventions. James's killer example is the government's chief scientist giving a lecture: "Food, water and energy shortages will unleash public unrest and international conflict, Professor John Beddington will tell a conference tomorrow."
He says this would rank low in the WCRF and SIGN grading systems. He's wrong: those grading systems couldn't rate this article at all, because there's no advice about a health intervention. It's a completely inappropriate tool to use. It's obvious that a grading system for assessing the quality of evidence for health advice will be entirely unhelpful here. It would be entirely stupid to use the WCRF grading system on such a story. Nobody has suggested doing so. Nobody would do so. If they tried they'd fail.
I think it's odd that he insists this is our position when it's not, and when we told him it's not. He explains in his piece that we would want to see the examples he gives banished from newspapers. We would want no such thing.
James says that by insisting on applying the WCRF and SIGN grading systems to all science and environment stories we "demand a standard of evidence for writing about science that is self-defeatingly high", that this "would exclude almost all science from newspapers". It's all very odd. We just don't insist on that at all. We think it would be silly.
What we did was very simple: we assessed the quality of evidence for every one of 111 dietary health claims in one week of newspapers. We found that overall, the quality of evidence for dietary advice was poor, and that this might lead to the public being misled, overall, routinely, by what they read in papers. We think it would be better if health advice in newspapers was generally based on stronger forms of evidence, but of course there will be times, even for the very specific issue of dietary advice, where there will be reasons to write stories on weaker forms of evidence. However, since about 70% of the advice given had the lowest two forms of evidence, there might be a matter of scale here.
Caveats
James argues that dietary advice with weak evidence can be presented with caveats, and that this makes our coding system unfair. He says that we "miss some very important context that is present in the articles and which, I believe, gives readers a chance to judge the quality of the evidence for themselves".
This is a very interesting hypothesis – that claims backed only by weak evidence are often presented in newspapers with a clear caveat to warn the reader. It's not my face-value impression, but that doesn't matter: what matters is whether somebody can do a study to examine whether claims with weak evidence were presented with caveats explaining the weakness of the evidence.
This would pose some interesting methodological problems, because "caveat" is hard to measure reliably (and also likely to be absent from the most-read part of newspaper articles, namely the headline).
It would be great if caveats were measured in future studies on the topic, and if they are, then as a starting thought, I would suggest that various specific aspects of them should be measured, including the presence, strength, positioning, and frequency of caveats. It would also be worth examining, in parallel, how readers interpret caveats, and whether they are "heard".
So the issue of caveats is an interesting one.
However, I would question whether what James has found, and presents in his critique as journalistic caveats that we have ignored, really are caveats that clearly explain the weakness of the evidence to readers.
It is risky to pull out one data point from a paper like this, but James presents this as his best example of a claim with a caveat: "There is some evidence to support taking the herbal remedy echinacea, but preparations vary so it is hard to tell what you are getting."
The evidence supporting this advice is in the third strongest category out of four. James says: "To my reading, Nolan expresses the uncertainty in the evidence around echinacea and hardly offers it a ringing endorsement." Well, you can judge, but I do not see a very strong caveat here to help the reader. I can see the word "some", but beyond that, doubt about the quality of echinacea preparations is not a caveat about the effectiveness of the intervention.
James then describes this, from the Guardian, as "again, a very contextualised response from a GP that ends up in the second lowest evidence category." Reading it through (I've pasted it below) I don't think this is a "very contextualised" piece that flags up the weakness of the evidence for the assertion it makes. In fact, it seems to me that this paragraph makes a series of very specific and confident assertions about the evidence, to the extent of specifying the precise amount of chocolate you should take – in grammes – to lower your risk of heart attack.
"A recent Italian study linked the combination of Italian food and dark chocolate with lower levels of a protein in the blood related to inflammation – C-reactive protein (CRP). Basically, the lower your CRP, the lower your risk of heart attack. There's one snag: the lowest risk is at a level of 20g every three days; below and above this level the risk rises. So eat chocolate, by all means, but make it dark, and don't overdo it. The fact that you're not overweight should in theory help to lower your risk further"
I think you would have to bend over backwards to view this as a "very contextualised" response, rich with caveats that have been unfairly ignored.
As I said, I think the issue of caveats is very interesting, though hard to code, and a good topic for a future paper. But even though James has bent over backwards to find fault, I don't see that the examples he has found represent strong caveats. There may well be better ones; I don't know.
'Obama will have made it an unusual week'
James asserts that because the week of newspapers that was chosen – at random – contained Obama's election, it would be an unrepresentative week.
The fact of it being a busy news week is discussed in the academic paper itself (there are lots of weaknesses, under the heading "weaknesses", as in any academic paper). For the proportion of weak-evidence claims to increase, you would have to believe that a big news story would selectively push only well-evidenced health claims out of the newspaper, leaving the weak ones behind.
It is always very risky to try concocting explanations after the fact for patterns you've observed in this kind of study. Many might think, before seeing the data, that a big news story would have either the opposite effect to the one James proposes (big stories push out the silly ones), or no effect at all on the relative proportions of strong and weak evidence claims. James's suggestion that a big news story selectively excludes strong-evidence health claims is an interesting hypothesis, which someone could assess in a future piece of research.
But bear in mind you would have to propose a very strong effect. For the effect he proposes, the high-quality claims must have been displaced while the low-quality ones tended to remain. If we were to decide, for the sake of argument, that it is ok for 30% of nutritional health claims to be poorly supported by evidence, then to shrink our ~70% figure to ~30%, Obama must have displaced at least three quarters of the high-quality claims and none of the low-quality ones. This may be true, but it's a very big selective effect, and a very large sample would be required to find out.
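For what it's worth, here is a back-of-envelope sketch of that arithmetic, in code. The round numbers (111 claims, roughly 70% weakly evidenced, a hypothetical target of 30%) come from the discussion above; the assumption that only well-evidenced claims get displaced is James's hypothesis, not a finding of the paper.

```python
# Back-of-envelope sketch of the displacement arithmetic above.
# Assumption (for illustration only): a big news story displaces ONLY
# well-evidenced claims, leaving the weakly evidenced ones untouched.

observed_total = 111
observed_weak = round(0.70 * observed_total)      # ~78 weakly evidenced claims
observed_strong = observed_total - observed_weak  # ~33 well-evidenced claims

target_weak_share = 0.30  # suppose the "true" weak share in a normal week were 30%

# If no weak claims were displaced, a normal week would need this many
# well-evidenced claims for the weak share to come out at 30%:
required_strong = observed_weak * (1 - target_weak_share) / target_weak_share

displaced_fraction = (required_strong - observed_strong) / required_strong
print(f"well-evidenced claims needed in a normal week: {required_strong:.0f}")
print(f"fraction the big story must have displaced: {displaced_fraction:.0%}")
# => roughly 80%: comfortably more than three quarters of the well-evidenced
#    claims pushed out, and none of the weak ones.
```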
It is also perhaps worth noting, at this stage, that I'm not aware of many numbers or studies receiving 2,400 words of close methodological appraisal comparable to what James has given this one. I would absolutely welcome that becoming more common throughout the media.
Reading patterns into the smaller numbers on individual newspapers
Next, it's notable that James attempts to read patterns into how many articles or claims there were in specific individual newspapers. This kind of small subgroup analysis is generally regarded as extremely unwise, for a simple reason: 111 claims, in 37 articles, is large enough to give a summary figure, but when those 111 claims and 37 articles are split ten ways among ten newspapers, the numbers are so small that the best explanation for variation between newspapers is random chance. We explained this in our paper: we don't think the numbers are big enough to draw conclusions about the number of stories in any individual newspaper, or the quality of evidence for the claims in any one specific newspaper.
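If you want a feel for how big that play of chance can be, here is a minimal simulation. It assumes, purely for illustration, that 111 claims land in ten identical newspapers completely at random; real newspapers of course differ in size and in how much health coverage they carry.

```python
# Minimal simulation: scatter 111 claims across 10 identical "newspapers"
# at random, and look at how much the per-paper counts vary by chance alone.
import random

random.seed(1)
n_claims, n_papers = 111, 10

for trial in range(3):
    counts = [0] * n_papers
    for _ in range(n_claims):
        counts[random.randrange(n_papers)] += 1
    print(f"trial {trial + 1}: claims per newspaper = {sorted(counts)}")

# Even though every "newspaper" here is identical by construction, the
# per-paper counts typically range from the mid single figures up to the
# high teens, which is why apparent patterns in subgroups this small are
# best explained by chance.
```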
We allowed ourselves to compare broadsheets against tabloids, as the numbers were still fairly large with that split, and we found a modest difference. Using the WCRF criteria, 67% of broadsheet health claims were from the weakest two categories of evidence, and 74% in tabloids (p=0.02 for those who are interested), so the difference wasn't very dramatic.
James insists on drawing conclusions from the number of claims from individual newspapers in that one week, and tries to explain the patterns he believes he has seen. I don't think that's valid. We explained why this is unwise. If James has an explanation of why random chance is not the best explanation for those patterns, at those tiny numbers for individual subgroups and newspapers, then he should say so.
You might be tempted to join him, and try to see patterns in the noise (really, the riskiness of this is something I've covered in the column many times). You might want to say, for example, after you've seen the results, that it's striking there were fewer health claims in The Times than some other newspapers. That might make sense to you. Well, it might be a true finding, it still might be chance (and also, remember, this was the first time anyone took a one-week sample and counted them all up). Does it make sense to you that the Mail did fairly well on quality of evidence? Probably not, I suspect. Does that change your mind about cherry picking individual newspapers, now that a result goes against your preconceptions? It shouldn't: it's all probably noise, you just shouldn't do it!
The "Goldacre criteria", and paper
I've no interest in a personal squabble (from journalists, you can imagine, I get plenty of those invitations). I should perhaps say that I've barely met James. But I do think it's quite odd that he refers repeatedly to the "Goldacre/Sanders study" (and the "Sanders/Goldacre study"). I'm extremely happy to be associated with the research, and am more than happy to discuss its strengths and weaknesses – it was an interesting first stab at a hugely important problem – but I wouldn't dare to take the name or the credit. Academics would refer to it as Cooper et al, because Ben Cooper is the first named author and the corresponding author, and he worked extremely hard on it. It is the Cooper study, and I felt I had to clarify this, as it was an odd and repeated turn of phrase. Similarly, they are not the "Goldacre/Sanders criteria": they're the WCRF and SIGN criteria, and calling them anything else seems very odd.
It's not a criticism raised by James, but I should also say that I don't think I'm necessarily the best person to write about this academic paper, since I was an author on it. Of course that was more than clear in the piece. However, nobody else wrote about it – it had been out for a couple of months when I did, and no other newspaper has covered it – in the same way that other journalists sadly seem not to write about the various other problems in the media that I occasionally cover (and it really is occasional, about once a month). Fair enough!
Summary, and thoughts on improving research
It's great to see people engaging with the serious issue of the media misleading the public on health advice, since despite major concerns, there has been almost no quantitative research on this in the UK. In the US there has been a lot more work, far bigger, and far better than our first attempt (a good place to start is Gary Schwitzer's publications here). This research finds widespread problems and shortcomings in the information given to the public through mainstream media, as anyone would expect, although they analyse slightly different types of health claims. A 2008 research paper said: "in our evaluation of 500 US health news stories over 22 months, between 62%–77% of stories failed to adequately address costs, harms, benefits, the quality of the evidence, and the existence of other options when covering health care products and procedures." You can find similar studies in Canada and Australia, with varying methods and results on varying questions, as a start.
Although it may be uncomfortable for people working in the media, this is a legitimate phenomenon to investigate, and to try and document. People make real world decisions based on the information that they receive through the media, and this has very real consequences for their health. If they are being routinely misled, then this is an important and serious public health issue.
On James's concerns, I think his central and lengthiest argument – that we want to use WCRF health advice criteria on all science and environment stories – is plainly absurd. As I say above, the issue of caveats is interesting, though hard to code (and I'm dubious about his examples of caveats). The issue of whether that one week was representative is also interesting. I think it would be unwise to make strong assertions about that after the fact, but it could certainly be investigated in a further study.
This first paper wasn't perfect: all research is done with resource constraints, all research can be improved on, and all research is explicitly presented with limitations. The study is what it is: a systematic sample of all 111 dietary health claims in one week of British newspapers, which is a large enough sample to draw some conclusions. It is entirely legitimate, and actively desirable, to raise issues around the methods and limitations. We discuss many in the paper, and I'm sure there will be some others that we have missed.
It is also useful to explain how far you think the limitations will change the result (will it turn the result entirely upside down?), and to explain how you think those limitations can be worked around.
The phrase "more research is needed" has, famously, been banned from the British Medical Journal: it's a sop, because you should say exactly what kind of further research is needed, and why.
If you believe the sample was too small – which is especially the case if you are keen to do subgroup analyses, on individual newspapers, or individual sections of newspapers, or specific subtypes of journalist – then you could simply replicate our paper with a larger systematic sample. If several different groups did this, with similar methods to each other, then a pooled analysis would be possible (or even blinded double-coding to look for agreement and disagreement). It's quite a lot of work, but hopefully publishable, and so worthwhile for, say, a medical student or a science communication MSc student looking for a CV point.
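As a concrete illustration of the double-coding idea, here is a minimal sketch of how two independent coders' evidence grades could be checked for agreement using Cohen's kappa. The grades in the example are invented purely for illustration, and a real study might prefer a weighted kappa or another agreement statistic.

```python
# Sketch of agreement checking for blinded double-coding: two raters grade
# the same claims independently, and Cohen's kappa measures agreement
# beyond what chance alone would produce. Example grades are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected_agreement = sum(
        (counts_a[c] / n) * (counts_b[c] / n) for c in categories
    )
    return (observed_agreement - expected_agreement) / (1 - expected_agreement)

# Evidence grades for ten claims, coded by two raters (invented example data).
rater_1 = ["convincing", "probable", "limited", "limited", "none",
           "probable", "limited", "none", "convincing", "limited"]
rater_2 = ["convincing", "limited", "limited", "limited", "none",
           "probable", "probable", "none", "convincing", "limited"]

print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```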
If you believe that "busy news weeks" will have a differential impact on the number of strongly and weakly evidenced claims, then you might want to find a replicable and valid way of coding "busy news week", and repeat the study in such a way that you sample enough days of each type. Or – again being constructive, and thinking methodologically – you could try to even that issue out with your sampling method: analysing all newspapers on a different day in many different weeks, perhaps, for however many weeks your resources permit.
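For what it's worth, here is one minimal sketch of that rotating-day sampling idea. The start date and number of samples are placeholders, and the rotation scheme is just one possibility; the point is only that spreading the sample across weekdays and weeks dilutes the influence of any single big news event.

```python
# Sketch of a rotating-day sampling scheme: sample all newspapers on one
# day per step, advancing eight days each time (a full week plus one) so
# the weekday rotates Monday, Tuesday, ... Sunday as the weeks move on.
from datetime import date, timedelta

def rotating_day_sample(start_day, n_samples):
    """Return one sampling date per step, eight days apart."""
    return [start_day + timedelta(days=8 * step) for step in range(n_samples)]

# Placeholder start date and sample size, purely for illustration.
for d in rotating_day_sample(date(2009, 6, 1), n_samples=8):
    print(d.strftime("%A %d %B %Y"))
```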
If you believe that caveats are a big issue, then you could devise a replicable and valid method of coding caveats (discussed above), and perhaps discuss it with qualitative researchers beforehand, as I suspect it will be an interesting and complex issue, before applying that to a systematic sample.
Needless to say, of course, you may want to do your study in a completely different way, or look at a completely different kind of health claim or scientific statement. We chose dietary health claims because we felt it was a good way of reducing ambiguity and arbitrariness about what kinds of stories and claims should be included, and because there are pre-existing systems for grading the quality of evidence.
I hope that was reasonably interesting. I think that the quality of health advice given to the public by the media is a very serious and important issue, I hope it will be researched more, and I actively look forward to our current summary figure being superseded by more detailed work.
(Thanks to William Lee and other co-authors for chats and occasional lines while writing this. Sorry if it's scatty: it was written swiftly in between other work).
• James Randerson responds to Ben's article in the comments below.