Rational Veterinary Medicine

Homeopaths love evidence; they throw it around by the ton, chucking papers and citations into discussions like confetti. There is, we are told, ample evidence that homeopathy is effective; you just need to look. When anyone does look, however, most, if not all, of the evidence is found wanting. In particular, any reference coming from the pages of one of the many pro-homeopathy journals is likely to be dismissed without further consideration.

This may seem unfair: a blanket dismissal of evidence simply because of its source. After all, many of these journals claim to be peer-reviewed, just like mainstream ones. So why do serious commentators feel justified in ignoring them? There are many reasons, but mostly it comes down to two: publication bias and a lack of scientific credibility.

Publication bias:

Publication bias is the phenomenon whereby journals are more likely to publish certain types of papers than others; in particular, they tend to publish positive results rather than negative or inconclusive ones. It's human nature, after all: no one wants to read that, in a trial costing millions, hundreds of patients were given a treatment for months on end and we are still not sure whether it works.

The reason for publication bias is often simply that researchers do not submit negative or inconclusive studies for publication. Cynics might also say it's good business: if a journal receives a portion of its income from that well-known drug manufacturer MegaPharm© Inc., then just maybe it will be more likely to publish papers suggesting that MegaPharm© Inc.'s products are really good (though strictly speaking this is funding bias).

Publication bias is well recognised as a universal phenomenon. The reason homeopathic journals in particular suffer such a credibility problem is that the degree of publication bias in them is so enormous. According to Schmidt et al (2001a), in 1995 a staggering 99% of papers published in complementary and alternative medicine (CAM) journals were positive. By 2000 this had dropped to 95%, but that is still an astonishingly low rate of negative or inconclusive results, unheard of in conventional journals; the authors describe it as 'minute' in a letter to the BMJ in the same year (Schmidt et al 2001b). Furthermore, while mainstream journals are taking steps to reduce publication bias by refusing to publish trials that were not registered in an electronic database before they began, there is no sign that alternative medicine journals are following suit.
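To see how selective publication alone can manufacture an overwhelmingly 'positive' literature, here is a toy simulation (my own sketch, not taken from the cited studies): a thousand trials of a remedy that does nothing, fed through a hypothetical journal that publishes every positive result but only one in twenty negative or inconclusive ones.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def run_trial(n=100):
    """Simulate a two-arm trial of an inert remedy: treated and
    control patients improve with the same 50% probability, so any
    difference between the groups is pure chance."""
    treated = sum(random.random() < 0.5 for _ in range(n))
    control = sum(random.random() < 0.5 for _ in range(n))
    return treated - control  # > 0 means the remedy 'looked' better

trials = [run_trial() for _ in range(1000)]

# Fraction of all trials that happened to favour the remedy: close to 50%
all_positive = sum(t > 0 for t in trials) / len(trials)

# A biased journal: publishes every positive trial, but only a small
# fraction (here 5%) of the negative or inconclusive ones
published = [t for t in trials if t > 0 or random.random() < 0.05]
pub_positive = sum(t > 0 for t in published) / len(published)

print(f"positive among all trials run:   {all_positive:.0%}")
print(f"positive among published trials: {pub_positive:.0%}")
```

The remedy does nothing, yet the published record ends up overwhelmingly positive; the 5% acceptance rate for negative results is an assumption chosen purely for illustration, not a figure from the literature.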

The type of paper seen in pro-CAM journals is also changing in a way which almost guarantees positive results: Schmidt et al (2001a) report that the number of clinical trials in such publications fell by 4% between 1995 and 2000, while the number of surveys, which are far less reliable as evidence than clinical trials, increased six-fold.

Homeopaths themselves appear unclear about the nature of publication bias in the reporting of clinical trials. In one small study (Caulfield and DeBow, 2005) the authors complain about the “harsh” language used in mainstream journals about homeopathy and wrongly claim that this led to “publication bias”, supposedly demonstrated by their finding that 65% of trials of homeopathy published in mainstream journals were negative, compared with only 30% in pro-CAM journals. In fact, despite the authors' disingenuous suggestion, publication bias works the other way round: it is not that proper journals are prejudiced against homeopathy, it is that pro-CAM journals are too uncritical about what they publish. According to Gimpy's blog, even the basis for the alleged prejudice is false: the claims of "harsh language" were taken out of context or simply misquoted, which shows how little regard the average homeopathic researcher has for accuracy, so long as the conclusion tells them what they want to hear.

In addition, their lack of understanding of the scientific process led the authors to suggest that the sceptical tone in the introductions of many of the studies was inappropriate; they seem to have forgotten that scientific investigation is sceptical by nature, something which is, again, very revealing of the homeopathic attitude to research.

Homeopathic research:

And this brings me to the second point: homeopathic attitudes to research. Anyone used to reading scientific papers, particularly anyone who has submitted a paper for publication, will know what an unsentimental, cut-and-thrust process it is. Ideas, methods, statistics and conclusions are ruthlessly pulled apart by editors and peer reviewers, and then, if the paper is finally accepted after the tenth rewrite or so, it is subject to the same ruthless criticism from readers as every aspect is once again scrutinised. This, in essence, is the scientific method: a trial by fire in which evidence is tempered and proved, and only after which findings may, or may not, be accepted as valid.

When this process is applied to research carried out by homeopaths it is a different story altogether. Instead of entering into rational scientific debate, homeopathic researchers react with great indignation to any criticism and seem perplexed that their papers are not accepted at face value. The problem is that every homeopath is so convinced by their own personal experience of practice that they already 'know' in advance that homeopathy works; any research is done simply to confirm this truth to unbelievers and sceptics (for more details and references see this article). Critics of homeopathic trials are accused of prejudice, political motives, sexism, even imperialism, of being wrapped up in outdated mindsets and unfamiliar with quantum processes (which have absolutely nothing to do with homeopathy). We are told that conventional trials are inadequate to test homeopathy and that new forms of testing, such as "pragmatic trials", "observational studies" (Shekelle et al, 2005) and "dual blind" trials (Caspi and Millen, 2000), should be employed instead. All of these are simply ways of lowering the bar so that homeopathy can generate more duff trials which appear to give positive results. That way there is more bogus information to bombard people with, in the knowledge that it doesn't really matter: most people are trusting and will accept what they are told at face value; most people don't actually read references, much less know how to spot flaws and pitfalls.

Conclusion:

To recap: the reason we cannot accept trials published in pro-CAM journals at face value is bias, combined with a lack of scepticism and scientific rigour. Whereas the purpose of a true scientific trial is to test a hypothesis, the purpose of a homeopathic trial, when conducted by homeopaths, is to prove one (that homeopathy is effective). The difference is subtle but extremely significant: if a scientist's theory turns out not to be supported by the evidence, the scientist will reject or at least reformulate that theory; a homeopath in the same situation will argue strongly that the trial itself was flawed, because it has not confirmed what the homeopath knows to be true.

So I make no apology for roundly dismissing virtually all evidence from what have been described as 'trade journals' for homeopathy, whose peer reviewers are all homeopaths themselves with a vested interest in maintaining the illusion that there is something in it (if there wasn't, they'd be out of a job!). If I wanted good, objective information about MegaPharm© Inc.'s latest wonder drug I most certainly wouldn't read the MegaPharm© Inc. Monthly Gazette; neither would I read a pro-CAM journal for reliable, impartial information about a medical modality which claims it can cure cancer with water and sugar tablets.


References:

Caulfield, T. and DeBow, S. (2005) A systematic review of how homeopathy is represented in conventional and CAM peer reviewed journals. BMC Complementary and Alternative Medicine, Vol. 5, no. 12

Schmidt, K., Pittler, M.H. and Ernst, E. (2001a) A profile of journals of complementary and alternative medicine. Swiss Medical Weekly, Vol. 131, pp. 588-591

Schmidt, K., Pittler, M.H. and Ernst, E. (2001b) Bias in alternative medicine is still rife but is diminishing. British Medical Journal, Vol. 323, no. 7320, p. 1071

Shekelle, P.G., Morton, S.C., Suttorp, M.J., Buscemi, N. and Friesen, C. (2005) Challenges in systematic reviews of complementary and alternative medicine topics. Annals of Internal Medicine, Vol. 142, no. 12, Part 2, pp. 1042-1047