twitter: @rolygate

About Clinical Studies And Evidence

Clinical trials and studies provide us with evidence to support theories, and the weight of evidence depends on a number of factors. The same applies to laboratory tests. The factors affecting how the results should be interpreted include the size of the effects demonstrated, the apparent accuracy of the procedures used, the funding source, the history of the researchers, and the repeatability of the results.

Repeatability is a major consideration. In the end it is the most important factor: the same result obtained by different researchers, with different funding sources and different agendas, is a strong sign that the result is reliable. All researchers have an agenda, even if it has no apparent effect on the results obtained.

Evidence is supplied by a reliable result. 'Reliable' is not the same as 'proof'; a repeatable result is reliable, and 'good evidence'. Proof cannot be supplied by clinical studies, even if numbered in the hundreds: proof is obtained from real-world observations, such as national health statistics.

Trials, studies, and lab tests provide evidence - but that evidence may be flawed; a study is simply work that should be taken into account.


  • A clinical trial is a project in which subjects are subjected to specific regimens and subsequently measured in some way
  • A study is an examination of trials, surveys or other data
  • A laboratory test is an analysis of materials
  • A meta-analysis is a study that measures the compiled results of multiple studies
  • A 'study' may also be used as a generic term to mean any of the above

How a result is interpreted
It is generally accepted that a study of a phenomenon (such as a treatment, activity, or consumption of a material) showing an effect of less than 1% cannot be said to have identified an effect. A study showing a result of between 1% and 2% has identified an effect, but not strongly demonstrated one. A study showing a result of 3% or greater indicates a possibility of a clinically significant effect.

Multiple studies need to show the same result for it to be well demonstrated. Multiple studies that repeatedly show an effect of less than 1% strongly suggest that no effect can be reliably identified; multiple studies that repeatedly show a result of between 1% and 2% strongly demonstrate a small effect that is not clinically significant; multiple studies that repeatedly show a result greater than 3% demonstrate a clinically significant effect.
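The interpretation bands above can be sketched as a simple classifier. The thresholds are the ones stated in the text; the function name is illustrative, and note that the text does not assign a band to results between 2% and 3%, so that case is labelled explicitly rather than guessed at:

```python
def interpret_effect(effect_pct: float) -> str:
    """Classify a single study's effect size (in percent) using the
    interpretation bands described above."""
    if effect_pct < 1.0:
        return "no identifiable effect"
    if effect_pct <= 2.0:
        return "effect identified, but not strongly demonstrated"
    if effect_pct >= 3.0:
        return "possibly clinically significant"
    # The text leaves results between 2% and 3% unassigned.
    return "between the stated bands"

print(interpret_effect(0.5))  # no identifiable effect
print(interpret_effect(1.5))  # effect identified, but not strongly demonstrated
print(interpret_effect(4.0))  # possibly clinically significant
```

The same bands apply to single studies and, per the paragraph above, more strongly when repeated across multiple studies.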

In some cases, obvious bias is apparent. The most common is evidence that the funding source influenced the result, combined with the researcher's apparent need to secure future employment.

Ideological bias or evidence of funding influence (or both) can be found in work that purports to be scientifically valid. An unacceptable amount of research demonstrates the influence of an obvious conflict of interest; where it has a commercial purpose - such as suggesting efficacy when the real-world result is unlikely to replicate the trial's conclusions - it can justifiably be described as fraudulent.

Publication in a journal, or peer-reviewed publication, has no relevance whatsoever to the validity of the research or the accuracy of the evidence it reports: some of the most corrupt research ever perpetrated has been published in the most renowned peer-reviewed journals, and the uncritical acceptance of such research has undoubtedly cost thousands of lives.

Some research is clearly junk science funded for propaganda purposes. Indeed, there is a troubling trend of using research purely for its press-release value: junk science may be funded solely so a press release can be based on it; or a misleading press release is issued ahead of a study's publication and later revealed to contain blatant misinformation based on isolating part of the data; or, the old favourite, a press release is constructed around a fabricated version of the study's findings.

The junk science scale

Agenda-based research can be placed on a scale.

  1. Junk science
  2. Bogus science
  3. Fraudulent science

Clinical trials are easy to rig in order to benefit the funder and get the right result. There are multiple ways to suppress unfavourable results; here are some of the methods used:

  • Pre-trial subject selection
  • Use of young rather than old subjects
  • Group de-selections
  • Individual de-selections
  • Drug washouts
  • Placebo washouts

The rigging of clinical trials is so prevalent it even has its own terminology. The pre-trial system of identifying suitable and unsuitable candidates is the key to suppressing unwanted results such as negative outcomes or low success rates.

Preliminary trials are run in order to locate the best subjects for the documented trial. The first trial or trials identify successful or unsuccessful subjects in terms of the drug or placebo effects, and only the successful subjects go forward to the second trial. The final trial is the one that is fully documented, and it gets the desired result since the failures were eliminated first; negative outcomes appear insignificant because most were removed before the trial began.

The use of younger subjects tends to accentuate desired effects and suppress negative ones. Subjects in a group who are unlikely to show the desired result are removed. Individuals who show no desired effect or who show an undesired effect are removed. Those who react negatively to the drug tested are removed. Those who identify the placebo too easily are removed.

These methods can be applied in pre-trials, or if the documented trial is too carefully monitored, previous experience with the subjects can be used. Persons who take part in multiple trials are valued for this reason, and for the fact that pre-trials for a specific purpose can be masked among multiple other trials.
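The effect of pre-trial screening described above can be illustrated with a small simulation. All the numbers here are invented (a hypothetical population with latent drug and placebo responses, and arbitrary screening cut-offs); the point is only that removing placebo responders and drug non-responders before the documented trial inflates the apparent effect:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def trial_effect(subjects):
    """Mean drug response minus mean placebo response for a cohort."""
    drug = [s["drug"] for s in subjects]
    placebo = [s["placebo"] for s in subjects]
    return sum(drug) / len(drug) - sum(placebo) / len(placebo)

# Hypothetical population: each subject has a latent response to the
# drug and to placebo (arbitrary units; purely illustrative numbers).
population = [
    {"drug": random.gauss(1.0, 2.0), "placebo": random.gauss(0.8, 2.0)}
    for _ in range(1000)
]

# Pre-trial screening, as described above: drop strong placebo
# responders and drop subjects who show little or no drug response.
screened = [
    s for s in population
    if s["placebo"] < 1.0 and s["drug"] > 0.5
]

print(f"honest cohort effect:   {trial_effect(population):+.2f}")
print(f"screened cohort effect: {trial_effect(screened):+.2f}")
```

The screened cohort always shows a larger drug-versus-placebo difference than the full population, even though the drug itself is unchanged.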

Reliability of evidence
Clinical trials are easy to rig for the desired result; studies are easy to misrepresent by press release; laboratory testing can produce any result required by the funder. Even meta-analyses of numerous studies can be skewed - perhaps more easily than the trials themselves, since statistics can easily be massaged to get the 'right' result. Or, more commonly perhaps, they may simply consist of poor work, since the accurate interpretation of medical meta-statistics probably demands more skill than rocket science.
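To see why meta-analyses are easy to skew, consider one common pooling method, fixed-effect inverse-variance weighting. The pooled estimate is just a weighted average, so decisions about which studies to include (and how to weight them) move the result directly. The study numbers below are invented for illustration:

```python
def pooled_estimate(studies):
    """Fixed-effect inverse-variance pooling: each study is weighted
    by 1/variance, so precise (usually large) studies dominate."""
    weights = [1.0 / s["variance"] for s in studies]
    weighted_sum = sum(w * s["effect"] for w, s in zip(weights, studies))
    return weighted_sum / sum(weights)

# Invented example studies: reported effect size and its variance.
studies = [
    {"effect": 0.5, "variance": 0.04},
    {"effect": 2.5, "variance": 0.25},
    {"effect": 0.8, "variance": 0.09},
]

print(f"all studies pooled: {pooled_estimate(studies):.2f}")
# Quietly dropping the two less favourable studies moves the result:
print(f"cherry-picked:      {pooled_estimate(studies[1:2]):.2f}")
```

Here the honest pooled estimate is about 0.78, but excluding two studies yields 2.50 - the same arithmetic, a very different headline.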

Thus, 'evidence' is something of an ephemeral concept until, perhaps, it is confirmed by national health statistics.

To be perfectly honest, everyone has a conflict of interest of some kind, and researchers are just like the rest of us. Identifying genuine evidence from unbiased research may be an art; the researcher's reputation is crucial.


created 2013-12-29
latest update 2014-04-21