Outcome Measures for Patients with Lower Limb Amputations



General introduction

Outcome measures can be used for many different purposes. A predictive measure should be able to classify individuals according to a set of pre-defined categories, either concurrently or prospectively, e.g. whether an amputee will use a prosthesis successfully(1)(2). Detecting differences between people or groups demonstrates the discriminative value of an outcome measure, e.g. being able to determine the different abilities of a trans-tibial and a trans-femoral amputee, or differences between prosthetic components, from the scores or times recorded(3). An evaluative measure, by contrast, should be able to detect changes in an individual or group, usually over a period of time or following some kind of intervention, e.g. a therapy programme(4) or the provision of a prosthetic component. Some outcome measures are designed to do only one of the above, while others may do a combination, though some of the requirements of these different types of outcome measure are competing(5).

Whichever purpose it is designed for, the psychometric properties of an outcome measure need to be reported to satisfy the user that it is fit for purpose in the population they wish to use it with(6). The psychometric properties of an outcome measure are the characteristics that express its adequacy in terms of reliability, validity and responsiveness. Another term often used is clinimetric properties. While developed from similar origins as psychometrics, clinimetrics has been described as the practice of assessing or describing symptoms, signs and laboratory findings by means of scales, indices and other quantitative instruments, all of which should have adequate psychometric properties(7)(8).

Considerations before choosing an outcome measure

If you are considering using an outcome measure with an amputee, it is worth asking yourself the questions posed on the Outcome Measures page here in Physiopedia. At the very least, you should consider the following questions with your amputee patient or group in mind.

Why am I using an outcome measure?

  • Am I trying to establish a baseline measure from which I can monitor changes over time for an individual patient?
  • Am I trying to predict how my patient is going to perform?
  • Am I trying to evaluate the impact of a treatment programme or prosthetic component on an individual or a group?
  • Am I trying to evaluate the needs of the amputees attending my service?
  • Am I trying to evaluate how my service is responding to the needs of amputees?

What am I aiming to measure?

  • Impairments of body structure and function?
  • Activity limitations?
  • Participation restrictions?
  • Quality of life?
  • Something else?

When you have an outcome measure in mind, you should also consider the following questions.

Have the clinimetric properties of the outcome measure I am considering been measured in a population similar to mine?

  • Is the outcome measure reliable?
  • Do I know the measurement error associated with its scores?
  • Do I know the minimum detectable change?
  • Is the outcome measure valid?
  • Does it measure what I want it to measure?
  • Is the outcome measure responsive to change?
  • Is there a known minimum clinically important difference?

Here are some examples of studies in which the clinimetric (sometimes called psychometric) properties of outcome measures have been reported in an amputee population, and what the results may tell you.

Reliability

Reliability is usually measured using the intra-class correlation coefficient (ICC) and is presented as a number between 0 (no consistency) and 1 (complete consistency)(9).
Intra-rater Reliability: This indicates how consistently a single rater administers and scores an outcome measure.
Inter-rater Reliability: This indicates how well two raters agree in the way they administer and score an outcome measure.
Test-retest Reliability: If an individual completes a self-report survey and then repeats it on a second occasion when no change is expected, the results should be similar. A short worked calculation of an ICC follows the study example below.

  • Brooks, Hunter et al (2002) examined the reliability of the 2MWT(10). Participants completed 2 successive timed walks measured by 2 different raters on 2 consecutive days. Intra-class correlation coefficients (ICCs) were >0.98, showing excellent intra- and inter-rater reliability.
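For readers who want to see what sits behind a reported ICC, the short Python sketch below (assuming numpy is available) computes a two-way random-effects, single-measure ICC, often written ICC(2,1), from a small invented matrix of walk scores in which rows are patients and columns are raters. The data and variable names are purely illustrative and are not taken from the studies cited here.

import numpy as np

# Hypothetical data: 2MWT distances (metres) for 5 patients (rows)
# scored by 3 raters (columns). Values are invented for illustration.
scores = np.array([
    [92.0, 94.0, 93.0],
    [61.0, 63.0, 62.0],
    [78.0, 80.0, 79.0],
    [55.0, 54.0, 56.0],
    [84.0, 86.0, 85.0],
])

n, k = scores.shape                      # n subjects, k raters
grand_mean = scores.mean()
row_means = scores.mean(axis=1)          # per-subject means
col_means = scores.mean(axis=0)          # per-rater means

# Two-way ANOVA mean squares (no replication)
ss_rows = k * np.sum((row_means - grand_mean) ** 2)
ss_cols = n * np.sum((col_means - grand_mean) ** 2)
ss_total = np.sum((scores - grand_mean) ** 2)
ss_error = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)                      # between subjects
ms_cols = ss_cols / (k - 1)                      # between raters
ms_error = ss_error / ((n - 1) * (k - 1))        # residual

# ICC(2,1): two-way random effects, absolute agreement, single rater
icc_2_1 = (ms_rows - ms_error) / (
    ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
)
print(f"ICC(2,1) = {icc_2_1:.3f}")  # values near 1 indicate highly consistent raters

Which ICC form is appropriate depends on the study design (for example, whether raters are treated as fixed or random), so published studies usually state the form they used.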

Measurement error: This is the degree to which scores or ratings are identical irrespective of who performs or scores the test. It can be reported using the standard error of measurement (SEM) or the minimal detectable change (MDC), which is the same as the smallest detectable change (SDC)(11). The calculation of the SEM and MDC from an ICC is sketched after the examples below.

  • Deathe& Miller (2005) reported the SEM in absolute values, which was 3sec for the L-Test(12).
  • Resnik& Borgia (2011) also reported MDCin absolute values for all the measures they studied: 2MWT (34.3m), 6MWT (45m), TUG (3.6s) and AMP (3.4pts)(13).

Internal Consistency: This reliability property is reserved for outcome measures that are designed to test only one concept. Internal consistency assesses the extent to which all the items or questions in an outcome measure address the same underlying concept, e.g. in a mobility scale, all the items should deal with mobility(5).
There are two main methods used to report internal consistency: Classical Test Theory uses Cronbach's alpha (α) to indicate the reliability of an outcome measure as a whole, while Item Response Theory uses Rasch analysis to assess internal consistency by looking at each item within the outcome measure(14). A short sketch of the alpha calculation follows the example below.

  • The internal consistency of the ABC scale was considered excellent as measured by Cronbach's alpha (0.93) in a study by Miller et al (2003)(15).
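The Cronbach's alpha statistic itself is straightforward to compute. The Python sketch below (assuming numpy is available) applies the standard formula α = (k / (k − 1)) × (1 − Σ item variances / variance of the total score) to an invented item-response matrix; it is illustrative only and does not use ABC scale data.

import numpy as np

# Hypothetical responses: 6 respondents (rows) answering 4 items (columns)
# on a 0-10 confidence scale. Values are invented for illustration.
items = np.array([
    [8, 7, 9, 8],
    [3, 4, 3, 2],
    [6, 6, 7, 6],
    [9, 9, 8, 9],
    [2, 3, 2, 3],
    [5, 4, 5, 5],
])

k = items.shape[1]                               # number of items
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed score

# Cronbach's alpha: values closer to 1 suggest the items measure the same concept
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")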

Rasch analysis was used to examine all the items in the Berg Balance Scale, which confirmed that the scale was able to test a range of difficulty and to identify four levels of ability(16).


Validity



References
