Appraising the Quality of Knowledge Resources

Revision as of 14:44, 28 November 2023

This article or area is currently under construction and may only be partially complete. Please come back soon to see the finished work! (28/11/2023)

Original Editor - User Name

Top Contributors - Wanda van Niekerk, Jess Bell and Kim Jackson  

Introduction

The third step in the evidence-based practice process is appraising the quality of the resources found by creating a clinical question and locating the resources. Critically appraising a paper or study means closely examining it so that you can decide whether it is worthy of informing your clinical practice.[1] Unfortunately, just because a study is peer-reviewed and published does not necessarily mean that it is of good quality.[2] Clinical research is criticised as not being useful for reasons such as[2]:

  • it is not sufficiently pragmatic (applicable to real-life scenarios)
  • it is not patient-centred, transparent or feasible.

Also, with so many research papers published, it is challenging to decide which studies to use. This is where the critical appraisal of studies is advised.

How to Deal with this Information Overload?

The vast amount of published information available creates two questions for rehabilitation professionals.[3]

  1. How do you ensure that you find what you need in the vast amount of available information?
    1. Develop a well-thought through clinical question (Evidence Based Practice and Patient Needs)
    2. Use specific search terms (Locating the Knowledge Resources in Evidence Based Practice)
  2. How do you know which findings are based on sound science and are applicable to your specific case or scenario?

If good summarised evidence-based information, such as a systematic review or a clinical practice guideline, is available that answers your exact clinical question, use it, learn from it, and apply it as you see fit with your clinical reasoning skills.

Steps in Appraising the Quality of Knowledge Resources

Before using the results of a study to help you with the clinical decision-making process, it is important to determine if the study has used sound methods. Poorly designed studies may lead to bias and may also provide you with misleading results.[1]

Levels of Evidence

Levels of evidence (also referred to as hierarchies of evidence) help in deciding which study type will provide the best evidence for a specific question. The heuristic ("rule of thumb") developed by the Oxford Centre for Evidence-Based Medicine is a useful tool to refer to.

Study designs

Not all studies indicate the exact study design used. Read the methods section of a paper to determine which study design was used; to do this, clinicians need a good understanding of the different study designs. The list below highlights some of the main study types that will help with evidence based practice.

  • Clinical practice guidelines
    • "set of healthcare recommendations developed by reviewing scientific literature and consensus from an expert panel"[4]
  • Systematic review
    • pooling together various primary or individual studies that fit pre-specified eligibility criteria in order to answer a specific research question. Systematic methods are used to reduce bias. A systematic review should have clear objectives and pre-defined eligibility criteria; a reproducible methodology; a systematic search to identify all eligible studies; an assessment of the validity of the included studies; and a systematic presentation and synthesis of findings[5]
  • Descriptive studies
    • tries to provide an idea or picture of what is going on or happening in a specific population
    • describes the problem rather than examining relationships or associations
    • Descriptive studies have PICO components such as Population and Outcome
    • Can include: case reports, case series, qualitative studies and surveys (cross-sectional studies)
  • Analytical studies
    • investigates the relationship between two factors - for example the effect of an intervention on the outcome
    • PICO or PECO components include interventions (I) or exposures (E) that are applied to different groups and compared (C)
    • can be experimental studies or observational analytic studies
    • experimental studies include randomised controlled trials
      • Randomised controlled trial - study participants are randomly assigned to either a treatment/intervention group or a control/placebo group
    • analytical observational studies include cohort studies; cross-sectional studies and case-control studies
      • Cohort studies – a group of participants is observed over a period of time to assess the effect of risk factors on an outcome
      • Cross-sectional study – a snapshot of what is going on in a population at a specific time
      • Case-control studies – two groups are compared, one with the outcome of interest and one without
  • How to Spot the Study Design
  • Three questions, as per the design tree
  • Q1 - What was the aim of the study?
    • to describe a population - descriptive study
    • to investigate relationship between factors - analytic
  • Q2 - if the study is analytical, was there a random allocation of the intervention?
    • yes - randomised controlled trial (RCT)
    • no - observational analytic
      • For this group, the main types of studies depend on the timing of the measurement of the outcome
  • Q3 - When were the outcomes measured?
    • some time after intervention - prospective cohort study
    • at the same time as intervention - cross-sectional study
    • before the exposure was defined or determined - case-control study (retrospective study)
  • Downloadable PDF by Jeremy Howick about study designs
  • Read more about the advantages and disadvantages of the designs here:
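The three design-tree questions above amount to a small decision procedure, and they can be sketched as a short Python function. This is only an illustrative aid for following the article's logic: the function name, parameters and timing labels are assumptions made here, not part of any established tool.

```python
# A sketch of the three-question "design tree" described above.
# Function and parameter names are illustrative assumptions.

def classify_study_design(aim_is_descriptive: bool,
                          randomised_allocation: bool = False,
                          outcome_timing: str = "") -> str:
    """Classify a study design using the three design-tree questions."""
    # Q1: What was the aim of the study?
    if aim_is_descriptive:
        return "descriptive study"
    # Q2: Analytical - was the intervention randomly allocated?
    if randomised_allocation:
        return "randomised controlled trial"
    # Q3: Observational analytic - when were the outcomes measured?
    timing_to_design = {
        "after intervention": "prospective cohort study",
        "same time as intervention": "cross-sectional study",
        "before exposure determined": "case-control study (retrospective)",
    }
    return timing_to_design.get(outcome_timing,
                                "unclassified observational study")

# Example: outcomes measured some time after a non-randomised exposure
print(classify_study_design(False, False, "after intervention"))
# prospective cohort study
```

Working through a real paper in this order (aim, then allocation, then timing) mirrors the series of questions the Centre for Evidence-Based Medicine suggests for spotting a study design.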





Link to design tree by Centre for Evidence-Based Medicine

Two arms of design tree – Descriptive and Analytical

Descriptive – Population and Outcome – no intervention

Analytical – PICO or PECO – Population, Intervention / Exposure, Comparison, Outcome

Descriptive studies -

Experimental studies – active involvement of the researcher; one group may be given an intervention, another group serves as a control, and the groups are compared

https://www.physio-pedia.com/Quantitative_Research

Randomised control trials – used to determine the effect of an intervention

Randomisation – ensures that unknown or confounding factors are equally distributed between groups

Concealed allocation –

Blinding

Types: randomised parallel group or crossover design

GetReal Trial Tool

Observational analytical arm – the researcher is less involved than in an experimental design


How to decide what study design was used

Centre for Evidence-Based Medicine – series of questions

Grimes and Schultz image

Historically controlled studies

Mechanism-based reasoning

Case series

Critical Appraisal

Equator Network

CASP

How to Read a Paper


Resources


References

  1. Hoffmann T, Bennett S, Del Mar C. Evidence-based practice across the health professions. Elsevier Health Sciences; 2023.
  2. Ioannidis JP. Why most clinical research is not useful. PLoS medicine. 2016 Jun 21;13(6):e1002049.
  3. Olivier B. Appraising the Quality of the Knowledge Resources Course. Plus. 2023.
  4. Conley B, Bunzli S, Bullen J, O'Brien P, Persaud J, Gunatillake T, Dowsey MM, Choong PF, Lin I. Core recommendations for osteoarthritis care: a systematic review of clinical practice guidelines. Arthritis care & research. 2023 Feb 10.
  5. Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC medical research methodology. 2019 Dec;19:1-2.