The Evidence

To paraphrase George Orwell, “some evidence is more equal than others”. Where evidence is concerned there is a distinct hierarchy, with weak forms such as case reports and anecdotes (stories) at the bottom and randomised, double-blind, placebo-controlled trials (RDBPCTs) and meta-analyses at the top, regarded as the most robust.
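
As a very rough illustration only, the idea can be written down as a simple ranked list - here as a Python sketch, with an ordering that is merely indicative, since published hierarchies differ in the detail:

    # An indicative ranking of evidence types from weakest (1) to strongest (6).
    # The exact categories and their order vary between published hierarchies;
    # this is only a sketch of the general idea, not an authoritative scale.
    EVIDENCE_HIERARCHY = [
        "anecdote / testimonial",
        "case report",
        "case-control study",
        "cohort study",
        "randomised double-blind placebo-controlled trial (RDBPCT)",
        "meta-analysis of good-quality RDBPCTs",
    ]

    def strength(evidence_type):
        """Return a crude 1-based rank; higher means more robust."""
        return EVIDENCE_HIERARCHY.index(evidence_type) + 1

    print(strength("case report"))                             # 2 - near the bottom
    print(strength("meta-analysis of good-quality RDBPCTs"))   # 6 - the top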

There is also the matter of where the evidence is published. A survey about alleged vaccine reactions published in “Fave Celebs” magazine is far less trustworthy than a paper on the same subject published in the British Medical Journal. The best journals operate a process of “peer review”, where any paper submitted for publication is examined by specialists in the field under study, and also by statisticians, to make sure the maths has been done properly and the conclusions are justified.

“That's great”, you may be thinking, “very simple, now all I need to do is look at peer-reviewed RDBPCTs and ignore everything else”. Well, unfortunately not; like so much else in life, it's not that simple. In fact the act of looking at evidence has become a branch of science in its own right these days (see references below), and there are distinct dos and don'ts when it comes to reading and interpreting papers correctly.

For instance, a trial can claim to be blinded, but if the authors don't explain the methods used to achieve this you would be right to be sceptical; comparing two treatments will give less reliable results if the patients being studied know whether or not they are getting the real treatment. The same goes for randomisation: tossing a coin, or assigning people to alternate groups depending on the order in which they arrive at the laboratory, is randomising of a sort, but it's not very good, and if there was any chance the experimenter could have influenced which groups volunteers went into, the results would be that bit less reliable. Placebos too are not always perfect: if you are testing a small homeopathic sugar pill you're not going to use a large, bitter-tasting capsule as a blank control to compare with. So if the authors don't state what they used as a control, again you'd be right to suspect there may have been some subtle difference in taste, smell and so on which might have alerted the participants.
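
To make the difference concrete, here is a minimal Python sketch (with hypothetical participant codes) of the kind of concealed, computer-generated allocation a well-designed trial would use, rather than coin-tossing or alternation handled by the experimenter:

    import random

    def randomise(participants, seed=None):
        """Assign participants to 'treatment' or 'control' by shuffling the whole
        list and splitting it in half, so that neither the participants nor the
        experimenter can predict or influence who ends up in which group."""
        rng = random.Random(seed)      # seed generated and held by an independent party
        shuffled = list(participants)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {"treatment": shuffled[:half], "control": shuffled[half:]}

    # Hypothetical example: six volunteers, allocation drawn up in advance and
    # kept by someone who never meets the patients (allocation concealment).
    print(randomise(["P1", "P2", "P3", "P4", "P5", "P6"], seed=2024))

By contrast, alternation (first arrival into group A, second into group B, and so on) lets anyone who knows the arrival order work out - and potentially influence - who gets the real treatment.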

Peer review also isn't always everything it might be. Take homeopathic journals, for instance. Many will claim to be peer reviewed, but on closer inspection it turns out that it is other homeopaths who are doing the reviewing, not necessarily experts in the condition being studied, or statisticians, or people with expertise in trial design. There would have to be a strong suspicion of publication bias if the people doing the reviewing are already convinced that homeopathy is a valid method of treatment, and in fact this suspicion is supported by the evidence when such journals are analysed.

Even once a paper is published there are ways of presenting it which can make it look as if something has been shown when it hasn't. Homeopaths and proponents of CAM and other pseudoscience, such as the raw feeding of pets, are particularly adept at this. They will selectively quote from parts of a paper while ignoring other, more significant portions, thus “spinning” the actual conclusions. Trials which conclude that homeopathy is ineffective will be either ignored or attacked on bogus grounds, such as being 'unhomeopathic', while at the same time papers which appear to the outsider equally 'unhomeopathic', breaking several of the so-called 'laws of homeopathy', are accepted, so long as the findings also support the idea that homeopathy works.

This is known as 'cherry picking' - taking information out of context, or putting forward badly designed trials simply because they appear to support preconceptions - and it is the scourge of the scientific investigation of CAM.

So, here are a few general points to bear in mind when considering evidence about CAVM, which homeopaths and others with similar, single-agenda outlooks usually choose to forget:

1/ The purpose of a trial is to test a theory, not to prove it. The scientific process at its best is a way of impartially testing theories while correcting for the problems of subjective assessment and cognitive bias that we are all prone to, particularly if we feel strongly about the outcome. Researchers such as homeopaths who know what outcome they want before they start and are already convinced that homeopathy is effective do not make the most impartial investigators of homeopathy.

2/ The above notwithstanding, the RDBPCT is not always the be-all and end-all when it comes to testing drugs. Despite what is often claimed by detractors of real medicine, some drugs have an effect that is so obvious that there is no need for such a trial. To quote veterinary surgeon Dr Morag Kerr: 'the RDBPCT isn't by any means the "gold standard" test to be applied to every therapeutic intervention. It's quite pointless and indeed highly unethical to go through such an exercise for a self-evidently efficacious treatment. Imagine such a trial of insulin, for example! None has ever been done, nor will it be... No RDBPCT was done on trilostane, yet it was very quickly licensed. In fact the RDBPCT is the final arbiter of the NOT self-evident effect, where there is real doubt whether there is anything there at all. Unfortunately, nearly all "alternative" medicine is in that category, and certainly all of homoeopathy - if it were not, it wouldn't be "alternative"'.

CAM proponents are very quick to point out that some drugs and other treatments have never had a RDBPCT performed on them, but without mentioning the reason why - their action is so obvious that there is simply no need. Those who claim that most real medical interventions have 'never been tested' are simply wrong, as discussed by the late Bob Imrie (Imrie 2000) and Edzard Ernst (Ernst 2004).

3/ Homeopathy most certainly does NOT use "small doses of various substances" (as claimed here, for example - Jonas, 2003) to stimulate healing or anything else. The truth is that there is literally nothing whatsoever in homeopathic remedies - no active ingredients, just plain sugar, water or alcohol. Any original ingredient in all but a very few of the weakest remedies has been diluted out of existence until, literally, not a single molecule is left in the final product. Anyone who claims otherwise is either being economical with the truth or knows so little about the subject as not to be worth wasting time with. This means that when a homeopathic remedy is compared with a placebo, the comparison is actually between two placebos!
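
If that sounds like an exaggeration, the arithmetic is easy to check. Here is a back-of-the-envelope Python sketch, assuming (generously) a whole mole of starting substance and using the standard Avogadro constant, for remedies on the homeopathic 'C' scale, where each step is a 1-in-100 dilution:

    AVOGADRO = 6.022e23  # approximate number of molecules in one mole of a substance

    def molecules_remaining(starting_moles, c_potency):
        """Expected number of original molecules left after 'c_potency'
        successive 1:100 dilutions (the homeopathic 'C' scale)."""
        dilution_factor = 100 ** c_potency
        return starting_moles * AVOGADRO / dilution_factor

    # Hypothetical but generous starting point: one full mole of mother tincture.
    print(molecules_remaining(1.0, 12))   # ~0.6   - around 12C you are down to the last molecule
    print(molecules_remaining(1.0, 30))   # ~6e-37 - a commonly sold 30C remedy contains nothing

In other words, by the 30C potencies routinely sold, the expected number of original molecules in a dose is effectively zero, which is exactly the point being made above.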

All this of course is like water off the proverbial duck's back to CAM believers. They're not interested in robust methodologies and protocols or detailed analysis of statistics; they are interested purely and simply in results. It doesn't matter how dreadful a trial is, or even if it's just an open questionnaire - by their definition a good trial is simply one which gives the results they want, while anything which contradicts their world view is to be roundly condemned.

Now, however, you know enough to disabuse them of this misconception, and when told there are hundreds of trials which “prove” that homeopathy works you can nod politely in agreement before pointing out that, while strictly correct, this statement is somewhat marred by the fact that virtually none of them are worth the paper they're printed on!

Click on the links below if you'd like to read more on the subject of how to interpret a scientific study, and remember - some trials are better than others, so don't just accept the claims of self-interested CAM proponents. As I have discovered, some people will exaggerate or even lie if it suits their purpose - READ THE REFERENCES!


Further reading:

Evidence based medicine websites:

Bandolier - complementary and alternative medicine

Bandolier - a guide to bias [PDF version]

The National Institute for Clinical Excellence

Center for Evidence Based Medicine

Novella (2007), How Much Modern Medicine is Evidence-Based, NeuroLogica blog

How to interpret papers:

Greenhalgh (2010)

Evans (2011)

Shady research is rampant:

ClinicalPsychology.net

How to look at Evidence (an incredibly brief introduction)