“He who heals is right.”
Really? This apparently simple phrase, widely accepted as true, turns out to be full of pitfalls on closer inspection.
In fact, doctor and patient may only believe they see a “cure” or at least an “improvement”. This can be true, but it can just as well be false. Chronic illnesses in particular, with their ups and downs, may give only the illusion of an improvement. Doctors and patients can also deceive each other: both are under a certain pressure to succeed and may see improvements where there are none. Perhaps the patient does not dare to admit to himself or to the doctor that he is not feeling any better – this may happen more often than one might think.
If an improvement has actually occurred, there can in principle be two reasons for it: the improvement is causally due to the treatment, or it merely occurred close in time but independently of the doctor’s actions – usually because the patient has simply recovered on his own. Many other factors can also be responsible for the improvement (behavioural changes, rest, dietary changes, parallel or previous treatments, etc.). One would think it should not be so difficult to distinguish between these possibilities. That it is in fact one of the greatest challenges of medicine can be seen from history alone, which is rich in useless or even harmful procedures that were mistaken for cures for a long time, often for many centuries.
In good faith
Why is it so difficult for us to distinguish between causally related events and those that are merely close in time? The answer is simple: because the human mind is conditioned to recognise causation. It is obsessed with “putting two and two together”, “using common sense” and so on. The “combination specialist” human being is, as some say, a “credoman”. He seeks an explanation for everything, and the simpler it is, the better. This trait was certainly one of his trump cards in the game of evolution: it enabled him to tame fire, catch animals and invent tools, and it certainly helps him cope with everyday life in the 21st century. But credomania also has its dark side. It gets in the way whenever things are not so obvious, or – even worse – when they only seem to be. Very often, when two events occur simultaneously or shortly after each other, man blindly falls into the explanation trap. As if out of an inner compulsion, he believes that events that are close in time must also be causally connected. He equates correlation with causality, as the technical phrase goes. This is known as the “cum hoc ergo propter hoc” fallacy.
By now we know that personal experience, appearances and superficial plausibility cannot be relied upon. In his credomania, man bends reality to suit himself – one could say he morphs it into his own reality.
The rules of evidence-based medicine
Only scientific studies can rule out such self-deceptions. Only they can clarify whether two events – such as swallowing a pill and the subsequent cure or improvement – are really causally connected; that is, whether the first event, swallowing the pill, is the cause of the second, the cure. But be careful: not all studies are equal. For a study to be really meaningful, it has to follow certain rules. These rules are something like precision tools that, under the name “evidence-based medicine” (EbM), have revolutionised the art of healing over the past two decades:
A procedure, meaning here a medical method or a drug, must be examined in studies with a sufficient number of patients.
In these studies, the procedure must be compared with something else – ideally with an ineffective control procedure (a placebo, as far as ethically justifiable) or with the current standard treatment.
Subjects must not decide for themselves whether they belong to the treatment or the control group; they must be randomly assigned, so that the groups are truly comparable and the effects of distorting individual factors are minimised. Neither doctors nor patients may be able to recognise who receives the procedure to be tested and who receives the control procedure.
After an appropriate test period, pre-determined outcome parameters, which indicate whether the treatment has had an effect, must be checked.
These rules are now accepted by all medical societies as the gold standard. Such studies are called RCTs, short for “randomised controlled trials”.
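The random-assignment step described above can be illustrated with a minimal sketch in Python. All names (participant labels, group sizes, the seed) are purely illustrative and not taken from any real trial:

```python
import random

def randomize(participants, seed=None):
    """Randomly split participants into a treatment and a control group.

    Random assignment means that individual factors (age, disease severity,
    lifestyle, ...) balance out between the groups on average, instead of
    letting subjects self-select into the group they prefer.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the input stays untouched
    rng.shuffle(shuffled)               # the core of randomisation
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical trial with 20 participants
participants = [f"P{i:02d}" for i in range(1, 21)]
treatment, control = randomize(participants, seed=42)
print("Treatment group:", treatment)
print("Control group:  ", control)
```

In a real RCT the assignment list would also be concealed from doctors and patients (blinding), which this sketch does not attempt to model.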
About the limits of clinical trials
So much for the theory. In practice, many difficulties arise: RCTs are not always possible, and by far not all RCTs are really reliable, even if they formally meet the criteria. Even though the term “study” is often equated with “proof”, especially in public, many more aspects have to be examined closely before one can judge how meaningful a study really is. Loosely speaking, one must ask, on the one hand, which formal criteria the study fulfils and, on the other hand, how well it was actually carried out. A comparison: not every hostel is a luxury hotel. A hotel must fulfil the formal criteria to earn its stars, but it must also actually be run as well as one expects of a luxury hotel. The advocates of EbM know about these difficulties, and so they do not claim to proclaim “the truth” on the basis of studies, but rather to have the best tools for coming as close as possible to the “truth”, i.e. the circumstances of the real world.
There is another difficulty: according to the rules of evidence-based medicine, it is much more difficult and costly – some even say impossible – to prove the ineffectiveness of a procedure or a medicine in a study than to prove its effectiveness. Just as it is difficult to show that a certain species of butterfly is extinct, but easy to prove that it is not – all you have to do is find one specimen.
What does it mean to “prove” something?
For medicine, the difference is huge. If ineffectiveness is convincingly demonstrated, one may say: it is proven that the procedure does not work. All further attempts are superfluous and the case is closed. If, however, efficacy is merely not proven, one may only say: it is not proven that the procedure works. Further trials may be necessary and the case remains open. An “efficacy not proven” then easily becomes an “efficacy not yet clearly proven” in subsequent publications, so that in the end the message reaching the layperson – despite negative study results – is that the final proof of efficacy is really just a formality. This is how things stand with homeopathy at the moment.
The pitfalls of statistics
There is another problem: purely statistically, a positive result is always to be expected at some point. A difference between the treatment and the control group is called “statistically significant” if, assuming the treatment has no effect at all, such a difference (or a larger one) would occur by chance in fewer than 5% of cases – the usual significance level of p &lt; 0.05. Conversely, this means that of 100 studies of a completely ineffective treatment, about 5 (1 in 20!) will nevertheless produce “statistically significant positive” results purely by chance.
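This 1-in-20 arithmetic can be checked with a small simulation. The sketch below (all parameters are illustrative, not taken from any real trial) runs many simulated “studies” of a treatment that has no effect whatsoever – both groups are drawn from the same distribution – and counts how often a standard significance test nevertheless reports p &lt; 0.05:

```python
import math
import random

def fake_study(rng, n=50):
    """One simulated 'study' of a completely ineffective treatment:
    treatment and control values come from the same distribution,
    so any observed difference is pure chance."""
    treat = [rng.gauss(0, 1) for _ in range(n)]
    ctrl = [rng.gauss(0, 1) for _ in range(n)]
    # Two-sample z-test; the variance is known (= 1) in this simulation.
    z = (sum(treat) / n - sum(ctrl) / n) / math.sqrt(2 / n)
    # Two-sided p-value via the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p

rng = random.Random(1)
n_studies = 10_000
false_positives = sum(fake_study(rng) < 0.05 for _ in range(n_studies))
print(f"'Significant' results: {false_positives / n_studies:.1%}")  # close to 5%
```

About 1 in 20 of these entirely chance-driven studies comes out “significant” – which is exactly why a single positive trial of an implausible treatment proves very little.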
These problems show that even the best medical studies are prone to error. Nevertheless, EbM – valuable as it is as a purely statistical method of measurement – accepts no other sources of knowledge. Even laws of nature and other well-established findings are ignored. It is one of the iron principles of EbM not to ask how something works, but only whether it works. The result: even procedures such as the administration of homeopathic medicines, which cannot work because they fundamentally conflict with well-substantiated findings of physics, chemistry, physiology and pharmaceutics, are subjected to clinical trials. On the one hand, it is clear that the whole body of scientific knowledge cannot simply be set aside. On the other hand, the susceptibility of clinical trials to error, described above, explains why apparently positive results inevitably occur. This creates great confusion – but it does not bring the discussion to an end. Although homeopathy in 200 years (!) has not been able to prove the one thing that matters – an effectiveness beyond placebo – its advocates keep claiming a need for “further research”, because the ineffectiveness of the method has not been definitively proven …
Learn more on our website (in English):
About placebo and other contextual effects:
The “Cum hoc ergo propter hoc” fallacy of homeopathic drug testing:
Why people like to believe in homeopathy: