“There is no scientific evidence that homeopathy works.”
It is evident that homeopathy associations are engaging with critical arguments more intensively than before, e.g. with the work of the Information Network on Homeopathy (INH), a Germany-based team of doctors, pharmacists and other scientists with a skeptical approach to homeopathy. The Homeopathy Research Institute (HRI), on the other hand, is a London-based organization that promotes more homeopathy research and tries to present homeopathy as scientifically proven.
On the HRI website, you can find a menu item “FAQ Homeopathy”, where they try to rebut some of the key statements made by homeopathy critics. The articles are presented in English, German and a few other languages. These articles seem to be meant as templates for refuting critics like us, and they were used for a small brochure by DHU (Deutsche Homöopathie Union), which will very likely be distributed at the various lectures and training sessions DHU promotes. DHU, by their own account, is the leading manufacturer of homeopathic preparations in Germany, offering and sponsoring trainings for doctors, pharmacists, midwives etc., not to mention their support for a lot of health-related websites targeted at a lay audience.
In a short series of articles, we want to examine how and why we homeopathy critics come to our conclusions, and what to make of the counter-arguments of homeopathy promoters. Our articles will be published in three text versions in German, posted simultaneously on my blog, on the website of the INH, and at Susannchen; if these pieces are received favorably, we will translate the most detailed version into English, too (this is what you see here).
Background: Why do we need scientific evidence of efficacy?
It is certainly an advantage if the doctor you consult for your complaints uses a therapy that they know will have a beneficial effect on your condition. The therapist’s personal experience alone is certainly not enough to establish the efficacy of a therapy – not to mention the question of what happens to the patients on whom that experience is gathered by trial and error.
After all, it is not so easy to tell whether a therapy was successful. One of the reasons is the pronounced tendency toward self-healing, without which we would have gone extinct as a species long ago. This means that people (or animals or plants) become healthy and may recover even from life-threatening conditions without the help of medical interventions. Even severe infectious diseases are survived by some. If a doctor administers a therapy and the patient recovers afterwards, it is not certain that it was the therapy that caused the improvement. In pre-scientific times, for example, a lot of things were considered effective cures that today seem outright bizarre, like the dust of church bells, ground-up images of saints, or the fat and bones of executed criminals. Assuming that not all doctors from the Middle Ages up to the middle of the 19th century were charlatans, these people were convinced that they could actually heal with their cures, simply because they saw that patients became healthy afterwards – sometimes perhaps even despite their interventions. Gathering experience may thus lead the therapist to misconceptions about the power of their cures.
It is obvious that medicine will improve – and has done so in the recent past – by identifying and discarding ineffective therapies. Doctors today have a wide range of effective methods at their disposal, with data on the conditions for which they may be useful and on the probabilities of success, and they can weigh risks against benefits. Waiving evidence of efficacy would mean a step backwards into the pre-scientific era, when it was more or less a matter of luck whether the doctor prescribed an actually helpful medicine.
Background: What is scientific evidence?
Evidence of efficacy requires a drug to show an effect in patients that is established by scientific experiment. Such trials are performed on a larger number of patients so that the results are not distorted by individual patients’ characteristics. The participants are randomly divided into two groups, one of which takes the drug to be tested (the “verum group”), while the other takes a placebo without any active substance (the “control group”). It is important that the patients are “blinded” and do not know whether or not they received the actual medication. The same holds for the doctors and caregivers, so that all patients receive the same care. Scientifically sound evidence of efficacy is based on such studies. This is the gold standard of evidence: the placebo-controlled, randomized, double-blinded clinical trial (PCT). This study design is suitable for the individualized therapy approach of homeopathy as well, and a number of such studies (and systematic reviews of them) exist: all subjects go through the initial consultation and a drug is prescribed; in the pharmacy, either the prescribed drug or a placebo is randomly supplied to the patient.
If the remedy under test is effective, the outcomes of the two groups will differ, but it is necessary to determine whether the observed difference is really likely to have been caused by the drug. Even in a control group that effectively remains untreated, improvements occur which are evidently not caused by the drug. This may be due to the self-healing powers of the immune system, the naturally self-limiting course of many diseases, or the often-cited placebo effect, in which the patient’s expectations and trust in the cure cause – or at least strongly promote – an improvement in the patient’s condition.
Since both the placebo and the verum group usually show improvements in symptoms, the result must be evaluated with statistical methods to check whether the difference between the groups may have been caused by chance alone, e.g. through the random allocation of patients to the groups. So the result of a clinical study is not a clear yes or no. Only if the probability that the difference between the groups arose by chance is below the scientifically agreed threshold of 5% is it concluded that the drug may have caused the difference, and hence may have an effect.
However, even ineffective drugs occasionally produce improbable results that pass the 5% threshold yet are still products of mere chance. Therefore a single positive study cannot be a scientific “proof”; it must be replicated independently – at least by a different team of researchers with a different set of patients. Almost certainly there will be fluctuations between replications, and some may even yield quite different results. To arrive at a final conclusion from a set of trials, a systematic review is required, in which all published studies on a particular cure for a given clinical condition are examined and an overall result is determined. It is important that all available studies are taken into consideration, not only the positive ones. Such a review is considered reliable evidence if the data basis is adequate – but even then it is not an eternal “proof” in the sense that the conclusion could never be wrong. Individual studies can only provide a more or less strong indication of potential efficacy.
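The 5% threshold can be illustrated with a minimal simulation sketch (not from the article; the group size, improvement rate and the normal-approximation z-test are illustrative assumptions): trials of a completely ineffective remedy still come out “positive” about 5% of the time.

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

def two_prop_pvalue(x1, x2, n):
    """Two-sided two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

N_TRIALS, N_PATIENTS, IMPROVE_RATE = 2000, 100, 0.3  # assumed parameters
false_positives = 0
for _ in range(N_TRIALS):
    # both groups take, in effect, a placebo: the same improvement rate
    verum = sum(random.random() < IMPROVE_RATE for _ in range(N_PATIENTS))
    control = sum(random.random() < IMPROVE_RATE for _ in range(N_PATIENTS))
    if two_prop_pvalue(verum, control, N_PATIENTS) < 0.05:
        false_positives += 1

print(false_positives / N_TRIALS)  # close to 0.05
```

Each simulated “trial” compares two groups drawn from the same distribution, so any significant difference is by construction a chance finding.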
To note: even if a systematic review were to yield positive findings for homeopathy, this would establish the effectiveness of homeopathy only for the specific condition under investigation. No study or review can prove the efficacy of homeopathy in general.
Background: Why are case studies and individual reports not proof of an effect?
Due to the self-healing powers already referred to, which certainly exist at different levels in different individuals, there are always people who improve even in serious conditions without any cure – people who survive the most serious infectious diseases or even cancer, for example. There may be only a few of them, but they do exist. “Mortality of 80% when untreated” means that 20% of untreated people survive. As a result, there will always be patients who can report that they were (supposedly) cured because of this or that therapy, or because of whatever they did – like praying, promising to be good in the future, or erecting a church –, even if it contributed nothing real to their recovery. On the other hand, there will always be people who do not respond to an otherwise effective treatment and who may die from a disease despite a usually successful cure.
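The survival arithmetic can be made concrete with a quick sketch (the cohort size of 1,000 is an illustrative assumption, not a figure from the article):

```python
# A disease with "80% mortality when untreated" still leaves 20% survivors.
untreated_patients = 1_000      # assumed cohort size
mortality_untreated = 0.80      # mortality rate quoted in the text

survivors = round(untreated_patients * (1 - mortality_untreated))
print(survivors)  # 200 people who may credit whatever they happened to try
```

Every one of those survivors is a potential glowing case report, regardless of what they actually did.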
Consider the results of a recent study on alternative medical treatment of cancer. The results seven years after diagnosis and after alternative or conventional treatment looked like this (data transformed to meaningful numbers and entities):
|Patients after seven years|Alternative Therapy|Conventional Therapy|
|---|---|---|
|Survived|144|…|
|Died|…|72|
From the 144 patients who survived seven years under the alternative therapy, a whole host of case studies can certainly be derived, all of which suggest the success of alternative medicine. You may even point to the 72 patients who died under conventional cancer therapy to stress the alleged superior performance of the alternative therapy.
Even if positive cases under conventional therapy find their way into case studies as well, what will surely remain unnoticed is the fact that considerably more of the conventionally treated patients survived than of those under alternative treatment. And there will surely be very few case studies, if any, of the deaths under alternative treatment. At least the author of this piece is not aware of any case studies from alternative medicine with a negative outcome. But this does not mean they do not exist – only that the therapists are unwilling to share their failures.
The fact that considerably more patients died under alternative medicine than in the conventional group is completely lost in case studies. After all, those who died in excess are lying in their graves and do not recount their stories in talk shows, books, or interviews.
After all, it is hardly to be expected that therapists will publish their failures as case studies and disseminate them as widely and as vigorously as their positive results. A very important piece of information for assessing the effectiveness of a therapy is therefore missing. That would be like counting only the goals your team scored while forgetting that the opponents scored a lot more.
In a nutshell: case studies of successful treatments, even in large numbers, only show that there are some people who recover under the alternative treatment – and nothing else. There is no denying that these cases exist, but they give no indication of whether many more patients did not fare so well. Yet real patients and their therapists need information on the chances of recovery – which includes the ratio of failed treatments.
Facts: What evidence is available?
HRI states that by the end of 2014 there had been 189 randomized controlled clinical studies, 104 of which compared homeopathy with placebo. Of these, 43 allegedly showed positive results for homeopathy, 5 were negative, and 56 were “unclear” – which brings them to the conclusion that there are more positive than negative studies, which they take to indicate some efficacy of homeopathy.
What’s strange: apparently, the authors of this HRI article do not count it as a negative result when the efficacy of a homeopathic remedy is “unclear” – that is, when it cannot reliably be distinguished from a placebo and thus works no better than a piece of sugar. That’s quite amazing. One wonders whether patients who spend their money on an ineffective, sugar-like remedy, perhaps hoping for an improvement of their condition, would see it the same way.
In any case, by no means have the majority of studies yielded positive results.
The ratio of more than 40% successful studies looks impressive at first, but is still misleading.
First of all, given the accepted 5% probability of false-positive results, some positive studies are to be expected that are nevertheless nothing but chance findings.
Then there is the so-called publication bias, also known as file-drawer effect: the fact that positive results are readily and willingly published, whereas negative ones tend to remain in the file-drawer forever, never to be spoken or heard of again.
In addition, there are shortcomings in the published studies, such as inadequate blinding of the test subjects or inappropriate methods of evaluation, which can skew the results in a positive direction – and many more issues that render a study poor in quality. All of this increases the proportion of positive studies in the database.
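How the file-drawer effect alone inflates the apparent share of positive findings can be sketched with a toy calculation (the publication probabilities are illustrative assumptions, not measured values; the study count comes from the HRI figures above):

```python
# File-drawer sketch: selective publication inflates the share of
# positive findings even for a remedy with zero real effect.
alpha = 0.05               # conventional false-positive threshold
n_studies = 104            # placebo-controlled trials (HRI figure)
chance_positives = alpha * n_studies   # ~5 positives expected by chance

# Publication probabilities below are assumptions for illustration only:
p_publish_positive = 0.95  # positive results almost always get published
p_publish_negative = 0.40  # negative results often stay in the drawer

published_pos = chance_positives * p_publish_positive
published_neg = (n_studies - chance_positives) * p_publish_negative

share_positive = published_pos / (published_pos + published_neg)
print(round(share_positive, 2))  # 0.11 – more than double the true 5% rate
```

Even this mild bias doubles the visible positive rate; combined with poor study quality, the published record can drift far from reality.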
As already explained, evidence for efficacy of homeopathy for any indication can only be derived if studies have been independently repeated and the results of all of them pooled in a systematic review. It goes without saying that this review should cover the entire available evidence, not just the positive results.
Such reviews do exist indeed. From the work by Kleijnen et al. in 1991 to the recent work by Mathie et al. in 2019, there are twelve major reviews examining homeopathy across indications, and all of them yield more or less the same result: at first glance, the evidence in total may indicate that there could be small effects above placebo, but the quality of the available studies is so poor that no reliable conclusions can be drawn. Neither for homeopathy in general nor for any single indication is there reliable evidence that homeopathy outperforms placebo. The largest review published so far, by the Australian National Health and Medical Research Council (NHMRC) in 2015, comes to this conclusion, as does Mathie, who is affiliated with the Homeopathy Research Institute.
There is in fact no scientific evidence that homeopathy works.
What homeopaths tell us about it
The HRI muses that their 43% share of positive studies is the same ratio of success as in conventional medicine. So what? What is such a comparison supposed to show? If I want to compare my literary skills with those of Charles Dickens, it is certainly not helpful to ascertain that my ratio of torn-up and discarded pages may be the same as his. It would be important to compare what remains – in terms of quality, of course, not quantity. Or does this article sound anything like David Copperfield?
Such an approach based on the motto “Who won?” is absurd. In addition, homeopathic studies are confirmation research, i.e. the search for a positive result. In the lore of homeopathy there is no need for clinical trials, and in Europe homeopathic preparations are exempt from providing efficacy data in the registration process that qualifies them as medicines to be sold in pharmacies only. This may well lead to an increased confirmation bias: compared to real research, the positive evidence might be expected to be exaggerated due to a lack of scientific skepticism. Taking into account the strong claim of homeopaths that their treatment is on a par with conventional medicine – if not better –, you would expect a much more convincing base of positive evidence than is available today. You would also expect homeopaths to take great pains to explain how the negative results came about; their ratio of successful trials looks very poor for such a supposedly powerful treatment.
Another argument raised is the lack of public funding for research into homeopathy. Please note the following: DHU belongs to the Dr. Willmar Schwabe group, as does the largest manufacturer of homeopathic medicines in Austria, Peintner. According to their website, Schwabe sells products worth 900 million euros per year and spends a meager 32 million euros on research. Typical research spending in the pharmaceutical industry is about 14% of revenue; for Schwabe this would be around 125 million euros. There seems to be plenty of room for further research funding [5, 6].
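The budget arithmetic behind this point is easy to check (all figures are the ones quoted above):

```python
# Checking the article's research-budget arithmetic for the Schwabe group.
revenue_meur = 900        # annual sales in million EUR (from the article)
research_meur = 32        # actual research spending in million EUR
industry_share = 0.14     # typical pharma research share cited above

actual_share_pct = round(research_meur / revenue_meur * 100, 1)
expected_meur = round(industry_share * revenue_meur)

print(actual_share_pct)   # 3.6 – far below the industry-typical 14%
print(expected_meur)      # 126 – roughly the ~125 million mentioned above
```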
We critics of homeopathy maintain our position: there is no scientific evidence for an effect of homeopathy that exceeds placebo – and this is not due to the lack of money that could be invested in research.
Dr.-Ing. Norbert Aust,
Informationsnetzwerk Homöopathie (INH)
Thanks to Udo Endruscheit and Sven Rudloff for their support in preparing this English version.
Sources / References:
- Johnson SB, Park HS, Gross CP, Yu JB: “Use of Alternative Medicine for Cancer and Its Impact on Survival”; JNCI J Natl Cancer Inst (2018) 110(1): djx145, doi: 10.1093/jnci/djx145 [https://academic.oup.com/jnci/article/doi/10.1093/jnci/djx145/4064136]
- Kleijnen J, Knipschild P, ter Riet G: “Clinical trials of homeopathy“, BMJ 1991; 302:316-23, [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1668980/pdf/bmj00112-0022.pdf]
- Mathie RT et al.: Systematic Review and Meta-Analysis of Randomised, Other-than-Placebo Controlled, Trials of Non-Individualised Homeopathic Treatment.
- National Health and Medical Research Council. 2015. “NHMRC Statement on Homeopathy“, Canberra: NHMRC 2015 [https://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/cam02_nhmrc_statement_homeopathy.pdf]
- NN: “Statistics 2015 – Die Arzneimittelindustrie in Deutschland”, vfa brochure, p. 10, [https://www.vfa.de/embed/statistics-2015.pdf]
- Website of Dr. Willmar Schwabe, Facts and Figures as of December 31, 2016 [https://www.schwabepharma.com/about-us/facts-figures/]
Update, May 2019: Supplemented by the latest study situation (Mathie 2019)