Australian Pharmacist December 2013 | ©Pharmaceutical Society of Australia Ltd.
EVIDENCE BASED MEDICINE IN ACTION
How to read and interpret
clinical trials (part 2)
By Dr Ronald Castelino and Dr Luke Bereznicki
Evidence from clinical trials is the foundation of evidence-based health care, which, along with patient preferences, circumstances, and clinical experience, is central to effective clinical decision making. Clinical trials are prospective studies generally designed to test the superiority of an intervention (e.g. a treatment) over a control on specified outcomes in defined patient groups.
Randomised controlled trials (RCTs)
have become the 'gold standard' for
assessing therapeutic interventions.
Trials also help to resolve
current therapeutic uncertainties and
provide frameworks for future decisions.
Hence, applying evidence to clinical
questions requires filtering in the form of:
1. validity,
2. importance, and
3. relevance.
This article provides background and
tips on how to recognise quality trials
and focuses on evaluating the validity,
importance, and relevance of clinical
trial results. This month (part 2), we focus
on deciding whether study results are
clinically important and determining the
relevance of the results to your practice.
(Part 1 was published in the October issue.)
Key questions to consider in
appraising clinical trials1-3
Are the study results clinically important?
Having established validity, the next
step is to look at the results critically and
determine whether they are important for
your patients.
Was the outcome of sufficient importance
to recommend treatment to patients?
Clinicians should make their own
judgments about the clinical relevance
of outcomes. Health outcomes like death
and clinical manifestations of disease are
more tangible than non-clinical outcomes
or 'surrogates' (e.g. effects on laboratory
indices). 'Surrogate markers' are only useful
when meaningful correlations with clinical
events are well established.
Was the treatment effect large enough to
be clinically relevant?
Besides reporting the finding of a
statistically significant difference, the
trial must also provide an estimate of the
magnitude of the treatment effect. While
these may be reported in many different
ways, the expressions that are frequently
used are listed in Table 1.
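Table 1 is not reproduced here, but expressions of treatment effect commonly used in trial reports include the relative risk (RR), relative risk reduction (RRR), absolute risk reduction (ARR) and number needed to treat (NNT). As a minimal sketch of how these are derived, using illustrative event rates (not taken from any real trial):

```python
# Common expressions of treatment effect, derived from two event rates.
# The rates below are illustrative only: control event rate (CER) 30%,
# experimental event rate (EER) 20%.

def effect_measures(cer: float, eer: float) -> dict:
    """Derive RR, RRR, ARR and NNT from control and experimental event rates."""
    arr = cer - eer        # absolute risk reduction
    rr = eer / cer         # relative risk
    rrr = arr / cer        # relative risk reduction
    nnt = round(1 / arr)   # number needed to treat (1/ARR)
    return {"RR": rr, "RRR": rrr, "ARR": arr, "NNT": nnt}

print(effect_measures(0.30, 0.20))
# ARR = 0.10, RR ≈ 0.67, RRR ≈ 0.33, NNT = 10
```

Note how the same result sounds very different depending on the expression chosen: a 33% relative risk reduction and an absolute risk reduction of 10 percentage points describe one and the same effect.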
What was the size of the treatment effect?
The results are not clinically important unless
the effect is both statistically significant and
large enough to matter in practice.
Ideally, the effect of the intervention on
the primary outcome should be sufficiently
different from the effect of the alternative
that the average patient would have no
hesitation in making a choice.
What did the investigators consider a clinically significant difference?
It is useful to examine the methods
section to see whether the authors have
defined a 'clinically significant difference'
and whether they used this difference to
calculate the sample size for their study.
If a clinician believes that the anticipated
effect size is not clinically important, even
statistically significant results would not be
clinically useful.
Dr Luke Bereznicki is Senior Lecturer in
Pharmacy Practice and Acting Head
of School at the Tasmanian School of
Pharmacy. Dr Ron Castelino is a Lecturer in
Therapeutics and Pharmacy Practice at the
School of Pharmacy, University of Tasmania.
Was the treatment effect precise?
Statistical tests are done in order to
determine whether a given result might
have occurred by chance. It is sometimes
difficult to be certain that significant
differences have been either excluded or
confirmed: apparent differences may be
reported when there are no real
differences (type I error), or real
differences may be missed (type II error).
The probability that chance alone might
account for apparent differences between
groups studied is often expressed in terms
of 'p' values. For example, a study might
suggest that a drug reduces mortality from
30% to 20% and report a p value of <0.05 as
evidence of this. This means that if the trial
were performed repeatedly, a difference in
outcome between the study groups as big
as this (10%) or larger could be expected
to occur by chance alone in fewer than
5% of the studies. A more useful guide
to probability is the confidence interval
(CI, usually 95%) because it shows a range
of results that might be expected if the
study were repeated frequently in the same
setting. If the CI is narrow, the study gives a
more precise estimate of the true value of
treatment. Greater precision reduces
uncertainty when applying the estimates
from the trial to patients.
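For the hypothetical 30% vs 20% mortality example above, a 95% CI for the absolute difference can be sketched with the usual normal approximation. The group sizes used here (500 per arm) are an assumption for illustration; a real trial report would supply them:

```python
# 95% confidence interval for a difference in two proportions,
# using the normal approximation. The 30% vs 20% mortality figures
# come from the article's hypothetical example; n = 500 per arm
# is an assumed group size for illustration.
import math

def risk_diff_ci(p1, n1, p2, n2, z=1.96):
    """Return the risk difference and its 95% CI (normal approximation)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = risk_diff_ci(0.30, 500, 0.20, 500)
print(f"ARR = {diff:.2f}, 95% CI {lo:.3f} to {hi:.3f}")
# ARR = 0.10, 95% CI 0.047 to 0.153
```

With 500 patients per arm the interval excludes zero but is still fairly wide; halving the sample size widens it further, which is exactly the loss of precision described above.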
Did the study have adequate power?
The probability that a study
will fail to detect a real treatment
difference (β) is often set at 0.1 or
0.2. Put simply, the researchers accept
a 10% or 20% chance that a real treatment
effect exists but will remain undetected
(type II error). If the power of a study to
detect a difference is too low (e.g. <60%)
then adequately powered studies are
required to answer the clinical question.
Are the conclusions based on the primary
research question being answered?
It is important to determine whether the
conclusion is based on the primary
outcome, and to be wary of trials that report
no difference in the primary outcome
but emphasise a statistically significant
secondary outcome. Remember that if
sufficient comparisons are made, some
will appear to be statistically significant
by chance alone (about 1 in 20 at the 0.05 level).
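The '1 in 20' point can be made concrete: at α = 0.05, the chance that at least one of k comparisons is falsely significant grows quickly with k. The sketch below assumes the comparisons are independent, which is a simplification:

```python
# Probability of at least one false-positive result among k independent
# comparisons, each tested at alpha = 0.05 (a simplifying assumption;
# outcomes within a trial are rarely fully independent).

def family_error(k: int, alpha: float = 0.05) -> float:
    """Chance that at least one of k independent tests is falsely significant."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} comparisons: {family_error(k):.0%}")
# 1 comparison: 5%; 5 comparisons: ~23%; 20 comparisons: ~64%
```

This is why a 'positive' secondary outcome plucked from a long list of comparisons deserves scepticism unless the analysis adjusted for multiplicity.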