As several friends have noticed, I'm still alive. In fact there were no fatalities as a result of the mass homeopathic overdose last weekend, to the annoyance of some of the more vocal critics of the 10:23 campaign. Homeopathy organisations have been trying to respond, often finding amusing and creative ways to dig themselves deeper into a hole, as the New Zealand Council of Homeopaths did when it issued a press release admitting that its remedies contain no "material substances".
None has dug harder or faster than the British Homeopathic Association, which must now face some very serious questions about its misrepresentation of evidence to MPs, and to the public. Angry scientists are asking why studies they published that did not find in favour of homeopathy have been presented as if they had.
The BHA posts lists of studies on its website that it claims provide evidence for homeopathy, and in November it submitted a review of the evidence, written by homeopath Robert Mathie, to the Science and Technology Select Committee's "Evidence Check" on homeopathy.
It doesn't take much research to spot serious failings in Mathie's scholarship. For example, the BHA's submission starts by detailing five systematic reviews of homeopathy in general, four of which it claims "have reached the qualified conclusion that homeopathy differs from placebo".
I spoke to Jean-Pierre Boissel, an author on two of the four papers cited (Boissel et al and Cucherat et al), who was surprised at the way his work had been interpreted. "My review did not reach the conclusion 'that homeopathy differs from placebo'," he said, pointing out that what he and his colleagues actually found was evidence of considerable bias in results, with higher quality trials producing results less favourable to homeopathy.
The third of the four papers, Kleijnen et al, concluded that the data were "not sufficient to draw definitive conclusions". The fourth, published in 1997 by Linde et al, was updated two years later, and yet the update – which was more critical of homeopathy – was not cited.
Boissel pointed out an even more surprising error: that the two papers he was involved in were actually describing the same analysis. In other words, Mathie managed to take one study that the author emphatically maintains didn't support homeopathy, and present it as two studies that did. I asked Boissel whether he felt comfortable that his work was being presented to the public as evidence in favour of homeopathy. His response was simple: "Definitively no!"
The BHA's other evidence is also riddled with errors. Edzard Ernst, the author of a meta-study cited in favour of homeopathy, complained to me that "they omitted the important caveats from our conclusions and therefore were grossly misleading in the interpretation of our data."
A review by Jonas et al concludes that there are "too few studies to make definitive conclusions about the efficacy of any one type of homeopathic treatment on any one condition", yet it is cited by the BHA as evidence that homeopathy is effective in the treatment of rheumatic diseases.
Many of the dozen or so meta-studies cited by the BHA as being favourable to homeopathy simply aren't. In some cases, the BHA itself seems confused as to whether a particular paper supports homeopathy. A Cochrane review of homeopathy and influenza is cited on its website as evidence in favour, but was presented to the select committee as "inconclusive". Its actual conclusion reads: "Current evidence does not support a preventative effect of Oscillococcinum-like homeopathic medicines in influenza and influenza-like syndromes."
The BHA's confusion isn't limited to clinical evidence. It gives conflicting answers to simple questions, the most telling of which concerns "individualisation" – the idea that homeopathic remedies must be tailored to individual patients to be effective. In a statement issued to me last week and posted on its website, the BHA talks of "the many difficulties encountered squeezing a holistic and individualised treatment into a strictly controlled trial methodology." This is a common rationale offered by homeopaths for the frequent failures of their remedies when subjected to scientific scrutiny.
But Boots' own-brand homeopathic remedies are mass-produced in factories. They are about as individualised as a Big Mac. Surely then, by the BHA's own logic, Boots' products are not up to scratch? I put this question to Cristal Sumners, the BHA's spokeswoman, but she ignored it in her response. I tried again, but so far I have failed to get the BHA to tell me whether individualised homeopathic treatments are actually better than the non-individualised treatments sold by companies like Boots.
It's not just consumers and scientists who are angry with the BHA. I put my findings to Evan Harris MP, a member of the science and technology select committee who has expressed frustration at the standard of evidence presented by homeopaths. He told me: "The sort of cherry-picking and misrepresentation, not only of papers but of systematic reviews of papers that the BHA has happily engaged in here, seems designed to undermine evidence-based policy-making. It's right that it should be exposed and deprecated."
The BHA's approach to evidence is perhaps inevitable given the multi-billion dollar homeopathic industry's unwillingness to fund high-quality research. The starkest example of this can be found in the accounts of manufacturers. Pharmaceutical companies in general may spend around twice as much on marketing as on research, but the accounts of the homeopathic pill-maker Boiron reveal a marketing spend of more than €108m against a research budget of just €6.5m – a ratio of more than 16 to 1.
I'm no Alan Sugar, but maybe if homeopathic manufacturers put some cash into proving their pills worked they'd be easier to market.
The BHA should apologise for the poor standard of scholarship in its submission to the select committee – to MPs, the public and the scientists who feel their work has been misrepresented. The rest of the homeopathic community should put up or shut up. Dodgy dossiers and slick marketing are no substitute for good, clear research.
Martin Robbins writes for The Lay Scientist