
Evidence-Based Practice

Step 3: Appraise the information found

The appraisal of evidence is crucial for evidence-based practice. 

Evidence from original studies and pre-appraised secondary sources can vary in quality for a number of reasons, an important one being bias.

Bias

Bias in research refers to systematic errors or deviations from the truth in the way data is collected, analysed, interpreted, or reported, which can lead to incorrect conclusions. Bias can arise at various stages of the research process and can influence results in favour of one outcome over another.

There are many types of bias that can impact research, a few of which are listed below: 

  • Selection bias: Those who have been selected to take part in a study are not representative of the population under study. For example, if a drug trial only includes participants under 20 years old, the results of that trial may not be applicable to older people. 
  • Recall bias: Participants may not accurately remember past events. For example, if researchers ask study participants to recall how much alcohol they consumed over the past 2 months, participants may not recall correctly, leading to biased results. 
  • Racial bias: Systemic, institutional or interpersonal racism can influence the planning, methods, results and interpretation of research. For example, if a drug's effectiveness varies among racial groups and researchers attribute it to biological differences without considering social determinants of health, it can lead to biased interpretations of the results. 
  • Publication bias: Studies with statistically significant or positive results are more likely to be published than studies that find little or no effect. This can skew the body of literature toward particular conclusions.
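The selection-bias example above can be made concrete with some arithmetic. In this sketch, all numbers (effect sizes and age mix) are invented purely for illustration: a trial that enrols only young participants overestimates the average effect in a mixed-age population.

```python
# Hypothetical effect of a drug on systolic blood pressure (mmHg reduction).
# All numbers are invented for illustration.
effect_by_group = {"under_20": 8.0, "over_60": 3.0}
population_share = {"under_20": 0.3, "over_60": 0.7}

# True population-average effect: weight each group's effect by its share.
true_effect = sum(effect_by_group[g] * population_share[g] for g in effect_by_group)

# A trial that enrols only under-20s estimates the effect from that group alone.
biased_trial_estimate = effect_by_group["under_20"]

print(f"Population-average effect: {true_effect:.1f} mmHg")       # 4.5
print(f"Under-20-only trial estimate: {biased_trial_estimate:.1f} mmHg")  # 8.0
```

Because the sample does not reflect the population's age mix, the trial's estimate (8.0 mmHg) is nearly double the true population-average effect (4.5 mmHg).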

 

Note

For further information on the many types of bias that can impact research, check out Oxford’s Catalogue of Bias.

Once evidence has been gathered, researchers must critically appraise each study. The aim is to assess what was done, how it was done, and how well it was done, and to determine whether each study has credibility, significance, and relevance to the clinical question. 

When appraising evidence, it is important to consider: 

  • Internal Validity: This assesses whether the observed effects in a study can be attributed to the intervention or treatment being studied rather than to other factors or biases. 
  • External Validity: This determines whether results of a study can be generalised to other populations, settings, or conditions.
  • Impact: This looks at the clinical significance of a study’s findings to determine whether they have the potential to influence clinical practice.

While internal validity, external validity, and impact each have distinct implications for the quality and reliability of research findings, they are interconnected and can influence one another.


Internal Validity

Internal validity refers to the degree to which a research study accurately measures the relationship between the variables it intends to investigate. It assesses whether the observed effects or changes in the study's outcomes are indeed caused by the intervention or treatment being studied, rather than by other unrelated factors or biases. 

Researchers must consider several factors when assessing internal validity, including: 

  • Study design: Experimental designs, especially randomised controlled trials (RCTs), are known for high internal validity, as randomisation helps ensure that observed changes can be attributed to the treatment itself. However, if researchers aim to explore the lived experiences of participants undergoing treatment, a qualitative study would be more suitable for that purpose. 
  • Sample size: Larger sample sizes increase statistical power and improve the reliability of findings. Small samples increase the risk of chance findings and reduce confidence in the results. 
  • Control of confounders: Factors that are not being studied may still influence the results; for example, in a study of the relationship between exercise and weight loss, a participant's diet or health status might affect outcomes. These confounding variables should be controlled for, for example through statistical adjustment of the results. 
  • Blinding: Blinding is a methodological technique used to reduce bias and enhance internal validity. In double-blind studies, both participants and researchers are unaware of who receives the intervention and who receives the control treatment (e.g., placebo). Blinding ensures that participants' expectations and researchers' judgments do not influence the outcomes. 
  • Follow up / Attrition: Studies might involve follow-up assessments to track participants' outcomes over time. If participants drop out or become difficult to track, it can distort the results, as their experiences may differ from those who remain in the study.
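The sample-size point above can be illustrated with the standard normal-approximation formula for planning a two-group comparison of means: n ≈ 2(z₁₋α/₂ + z₁₋β)² / d² participants per group, where d is the standardised effect size. This is only a rough planning sketch using Python's standard library; real studies would use dedicated power-analysis software.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of means,
    using the standard normal approximation (a rough planning estimate)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2)

# A "medium" standardised effect (d = 0.5) needs about 63 participants per
# group, while a small effect (d = 0.2) needs roughly six times as many --
# one reason small studies risk missing real but modest effects.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393
```

The inverse relationship with d² shows why detecting small effects demands disproportionately large samples.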

External Validity

External validity refers to the extent to which the findings of a research study can be generalised to and applied in real-world settings, populations, and situations beyond the specific conditions of the study. It assesses the relevance and applicability of study results to broader contexts, populations, and settings. 

Researchers must consider several factors when assessing external validity: 

  • Representativeness of study sample: It's crucial for the study sample to accurately reflect the characteristics of the population of interest. A diverse and representative sample enhances the generalisability of the findings to the broader population. 
  • Similarity between study environments and real-world settings: The degree to which the conditions and settings of the study mirror real-world situations can impact external validity. For example, if a study was conducted exclusively under laboratory conditions, it may reduce the generalisability of the results in a real-world setting. 
  • Applicability to local contexts: Cultural, social, economic, and environmental factors can vary across different regions and contexts. Therefore, researchers should consider the relevance and applicability of study findings to specific local contexts or populations. Findings that are applicable and relevant to local settings are more likely to be adopted and implemented in practice.

Impact 

The impact of a research study evaluates its clinical significance and potential to influence real-world practices and decision-making processes. It examines whether the findings have practical implications for improving patient outcomes, guiding clinical interventions, or shaping healthcare policies and guidelines. 

Clinical Significance vs Statistical Significance 

Research papers often emphasise statistical significance: results deemed statistically significant are unlikely to have occurred by chance, which is easier to demonstrate with larger sample sizes. Clinical significance, on the other hand, concerns the real-world impact of a treatment: whether it delivers noticeable, practical benefits in everyday life. Research findings may be statistically significant without being clinically relevant, clinically significant without being statistically significant, significant in both senses, or significant in neither. 

  • Results are clinically significant and statistically significant: These results indicate a clear and meaningful impact that could inform practice, and based on statistical analysis they are unlikely to have occurred by chance. 
  • Results are statistically significant but not clinically significant: Statistically the results are ‘true’ (unlikely to have occurred by chance) but the difference is not large or important enough to justify change in clinical practice. 
  • Results are clinically significant but not statistically significant: The observed effect is large enough to matter in practice, but more data are needed to confirm that the results are not due to chance. 
  • Results are not statistically or clinically significant: The results could have occurred by chance, but either way they would not justify a change in practice.
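The second case above (statistically but not clinically significant) can be shown numerically. In this sketch, all numbers are invented: a very large trial finds a 0.2 mmHg blood-pressure reduction that is statistically significant (small p-value), yet the standardised effect size is tiny, so the finding is unlikely to matter clinically. A two-sample z-test approximation is used for simplicity.

```python
import math

# Hypothetical summary statistics from a very large two-arm trial.
n = 10_000          # participants per arm
mean_diff = 0.2     # mmHg reduction (treatment vs control)
sd = 5.0            # standard deviation in each arm

# Two-sample z statistic and (approximate) two-sided p-value.
se = sd * math.sqrt(2 / n)
z = mean_diff / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Standardised effect size (Cohen's d): mean difference in SD units.
cohens_d = mean_diff / sd

print(f"p = {p_value:.4f}")   # well below 0.05: statistically significant
print(f"d = {cohens_d:.2f}")  # 0.04: far below even a "small" effect (~0.2)
```

With 10,000 participants per arm, even a trivial 0.2 mmHg difference reaches statistical significance, which is why effect sizes must be judged alongside p-values.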

Tools for appraising the information found

Various tools are available for assessing the quality of individual studies. These tools aid in ensuring that decisions are grounded in solid evidence relevant to the given situation. Checklists serve as guides for evaluating various aspects of different types of studies, including internal validity, external validity and bias. 

The following is a list of online tools to assist with appraising evidence: 

  • CASP
    The Critical Appraisal Skills Programme (CASP) provides a variety of checklists covering different study designs, including Randomised Controlled Trials, Systematic Reviews, Cohort Studies, Diagnostic Studies, Case Control Studies, Economic Evaluations and Qualitative research. For more information on using CASP to find and evaluate evidence, check out these e-learning modules.
     
  • JBI
    The Joanna Briggs Institute (JBI) provides checklists for common study designs, as well as tools for textual evidence such as Policies, Expert Opinion and Narratives. JBI checklists include explanatory text for each criterion, which makes them easier to understand.
     
  • CEBM
    University of Oxford’s Centre for Evidence Based Medicine (CEBM) provides critical appraisal tools for Randomised Controlled Trials, Systematic Reviews, Diagnosis, Prognosis and Qualitative studies, as well as Individual Participant Data reviews. CEBM tools are available in various languages.
     
  • AACODS
    Developed by Jessica Tyndall of Flinders University Medical Library, this tool is designed for evaluating and critically appraising grey literature sources.