Inside the data fog – a new review on homeopathy

Reading Time: 4 min

Summary

New review on homeopathy – don't believe statistics that try to prove the impossible!

‘Trodden curds grow broad, not strong!’

This Goethe quote, proverbial in Germany, fits perfectly with a new systematic review that supporters of homeopathy have been passing around for some time as the ultimate proof of homeopathy's effectiveness. And the quotation is entirely justified: the work contains nothing new, but merely retreads studies that have long since been filed away – yet its positive result rests on a serious methodological error.

In this paper, published in 2023 by Hamre et al. [1], systematic reviews and meta-analyses that have been known for some time are themselves subjected to a systematic review. However, the selection was limited to reviews that analyzed placebo-controlled, randomized trials of efficacy for any indication in humans. Six reviews meet this criterion: Linde_1997, Linde_1998, Cucherat_2000, Shang_2005, Mathie_2014 and Mathie_2017. Why the other five systematic reviews were not included is not explained, since a review without meta-analysis – such as the work considered here – could easily have covered them as well.

Admittedly, a small positive effect for homeopathy was found in each of the six systematic reviews selected for the new study. However, given the poor quality of the underlying trials – which the reviews' own authors emphasized [2] – this does not justify the conclusion that homeopathy is effective. Since the remaining reviews lack even this slightly positive tendency, the authors' motivation for limiting themselves to the selected six should be clear enough.

Data Siberia

The first step is to analyze the data. And analyzed it is – it could hardly be more thorough. Across more than twenty (!) pages of the publication and dozens of pages of additional files, methods are explained and data are extracted, tabulated and then described in prose. Hardly a number mentioned in the texts of the reviews under consideration, or any other countable characteristic, goes unreported. The result is an almost unmanageable jumble of data and evaluations of every kind. Unfortunately, their presentation is never illuminated by any word on what these tabulations are supposed to be relevant for, or what they are supposed to prove. Reading all of this in detail is labor fit for a penal colony. Data Siberia.

Suddenly positive?

At some point, on page 21, you come across the ‘main findings’ and the statement that homeopathy has a significant positive effect across all indications. Apart from the fact that this result is wrong – see below – it is almost impossible to locate the pin of evidence for this statement in the preceding haystack of information. What is missing, however, is the fact that the authors of all six reviews considered the quality of the individual studies insufficient to derive any efficacy of homeopathy from them – and with it a critical appraisal of the results, which Cochrane regards as the core element of any summarizing review.

Such an appraisal is not even needed here, however, because the result rests on an unsuitable methodology that biases the outcome toward the positive.

The purpose of summarizing reviews

Systematic reviews and meta-analyses are usually carried out to pool several available study results, and thereby either reveal effects that are not recognizable in individual studies due to small participant numbers (beta error [3]), or to uncover contradictions between results and obtain an overall picture.

Various individual studies are combined and the review or meta-analysis then presents a result as if the total number of participants from all individual studies had been considered within a single investigation. The increased number of participants reduces the sampling error, i.e. the mean values of the pooled groups (placebo and verum) are closer to the mean value of the population, the confidence intervals become smaller and effects emerge more clearly from the statistical noise. This is expressed by the fact that even a smaller effect size leads to a significant (i.e. probably non-random) result in the summarized review than in the individual studies with smaller numbers of participants.

To illustrate: if heads come up seven times and tails three times in 10 coin tosses, that could easily be chance. With 100 tosses, by the law of large numbers, the ratio should approach the true probability, i.e. the result should move ever closer to a half-and-half split – perhaps 55 to 45 or so. If, however, heads come up 70 times and tails 30 times after 100 tosses, this is almost certainly no longer chance, but points to a systematic external influence.
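The coin example can be checked with a few lines of Python – a sketch that computes the exact probability of getting at least that many heads from a fair coin, rather than simulating tosses:

```python
from math import comb

def binom_tail(n, k):
    """P(X >= k) heads in n tosses of a fair coin (exact binomial tail)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 7 of 10 heads: happens in roughly 1 of 6 runs of a fair coin – plausibly chance
print(f"P(>=7/10 heads):   {binom_tail(10, 7):.4f}")

# 70 of 100 heads: vanishingly unlikely under a fair coin
print(f"P(>=70/100 heads): {binom_tail(100, 70):.2e}")
```

The same 70:30 ratio that is unremarkable at n = 10 becomes overwhelming evidence of a systematic influence at n = 100 – which is exactly the extra information that pooling genuinely independent participants provides.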

The point of a review, then, is to combine more participants so that a real effect stands out more clearly.

But what are the authors doing here?

They are not summarizing individual studies but systematic reviews, which in turn had examined individual studies. But – and this is the main difference from ‘real’ reviews – there is a large area of overlap: some individual studies appear in several of the reviews examined.

The authors themselves state that a total of 182 different individual studies were included in the reviews examined, but that these appear 310 times in total – and it is in this number that they entered the new review (‘All following descriptions refer to these 310 trials’). Individual studies, such as Jacobs’ 1994 work on childhood diarrhea in Nicaragua, are counted up to four times: across the included meta-analyses, 5 studies appear four times, 24 studies three times and 65 studies twice. The result Jacobs obtained from 92 participants is thus treated in the analysis as if it had been found in 368 subjects. In the coin example, this would be as if the 100 tosses had never actually taken place and the result of the 10 tosses had simply been multiplied by 10. The multiplied result contains no new information, so the law of large numbers does not apply: it remains just as much subject to chance as the original result, however large the multiplier.
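A small sketch shows why counting the same data several times only fakes precision. The numbers are hypothetical – 92 simulated outcomes stand in for a trial the size of Jacobs 1994:

```python
import random
import statistics as st

random.seed(1)
# hypothetical outcomes for 92 participants
# (e.g. treatment-minus-placebo differences on some score)
sample = [random.gauss(0.1, 1.0) for _ in range(92)]
quadrupled = sample * 4          # the same data counted four times over

def std_error(xs):
    """Standard error of the mean."""
    return st.stdev(xs) / len(xs) ** 0.5

print(round(st.mean(sample), 6), round(st.mean(quadrupled), 6))
# identical means: duplicating adds no information

print(round(std_error(sample), 4), round(std_error(quadrupled), 4))
# but the standard error is roughly halved – narrower confidence
# intervals and smaller p-values without one new participant observed
```

The duplicated "sample" of 368 looks four times as informative to any standard significance test, which is precisely the distortion toward the positive described above.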

Whether one then runs a new meta-analysis on this basis or, as Hamre et al. do, dispenses with one and simply serves up a data stew is irrelevant: the study population under discussion does not exist! It is artificially inflated.

And accordingly, the result for this review is not valid.

A methodologically correct overall review would have introduced and evaluated each of the 182 individual studies exactly once, and would also have considered and presented the quality of each individual study.

That is not what happened here. Instead, only the raisins were picked out of the existing reviews on homeopathy and rolled out as far as they would go – which made them broad, but not strong. Just like trodden curds.


[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10559431/

[2] https://www.homöopedia.eu/index.php?title=Artikel:Systematische_Reviews_zur_Homöopathie_-_Übersicht (in German)

[3] Beta error: An actual effect is not recognized because it is too weak and cannot be detected in the scattering of the data. The opposite is the alpha error, in which a random accumulation of data is mistaken for an effect.


Picture credits: Udo Endruscheit for the INH / Pixabay
