Interpreting a Qualitative Research Paper


Introduction

Interpreting a qualitative research paper means analysing the quality of the material in front of you. It allows you to judge the trustworthiness of the research and how the paper has been constructed.[1]

Characteristics of qualitative research[2]:

  • Explores meaning
  • Acknowledges the researcher’s point of view (reflexivity)
  • Interpretative methods of analysis
  • Iterative process
  • Contextual: concerned with the individual's perspective
  • Inductive method of inquiry

Critically Appraise Qualitative Research

Techniques for imposing rigour[2]:

  • Triangulation: the use of varied methods, data sources and multiple researchers
  • Reflexivity: the position of the researcher(s) in relation to the research and their interaction with participants
  • Multiple coding: the use of independent researchers to code the data, the calculation of inter-rater reliability (see the sketch after this list) and the generation of ideas
  • Respondent validation: involving participants by asking for their opinions and interpretations, which provides an overview and generates further data
  • Deviant case analysis: most relevant in grounded theory; exploring participants who appear to deviate from the norm
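
Inter-rater reliability, mentioned under multiple coding, is often summarised with an agreement statistic such as Cohen's kappa. The short Python sketch below is purely illustrative (it is not part of any appraisal checklist); the coders, code labels and data are invented for demonstration.

```python
# Illustrative only: Cohen's kappa between two coders who each assigned one
# (hypothetical) code per transcript excerpt.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement expected from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

coder_1 = ["barrier", "facilitator", "barrier", "barrier", "facilitator"]
coder_2 = ["barrier", "facilitator", "facilitator", "barrier", "facilitator"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.62 on this toy data
```

A kappa close to 1 indicates strong agreement beyond chance, while a value near 0 suggests agreement no better than chance; in multiple coding the point is the process, so discrepancies flagged by a low kappa would then be discussed and resolved.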

Edwards et al (2002) describe the use of a “signal to noise” approach, in which a balance is sought between the methodological flaws of a study and the relevance of the insights and findings it adds to the overall synthesis. Other researchers do not acknowledge the value of critical appraisal of qualitative research, arguing that it stifles creativity (Dixon-Woods, 2004). While recognising that all of these views have some basis, certain approaches succeed in positioning the qualitative research enterprise as one that can produce a valid, reliable and objective contribution to evidence synthesis.

CASP checklist

CASP stands for the Critical Appraisal Skills Programme, which offers free downloadable checklists to help in critiquing research papers.

Critical Appraisal Skills Programme (CASP) checklists are often used in health research and cover many research methods, including qualitative research. They are designed to prompt the reader to reflect on different aspects of a research paper and are typically structured around three core domains asking:

  1. Are the findings of the study valid?
  2. What are the findings?
  3. Will the results help locally? (i.e. in my setting)

The qualitative checklist is made up of ten questions, grouped into three sections.

Section A looks at the validity of the study results. This assessment can be subjective: qualitative studies do not report statistical significance, so the reader relies instead on critically appraising and reflecting on the methods and design of the research.

Section B looks at the findings or the results of the study.

Section C is concerned with the applicability of the results.

When analysing a qualitative research paper, think about these questions:

  • Is there a clear statement of the aims of the study?
  • What are the goals of the study?
  • What was the rationale for this research?
  • How relevant is this?
  • Is the research question clearly formulated? Is it important?

A- The research introduction should give the context and reflect the importance of the research question, leading up to the rationale for the research. It should also identify the gap in the area being researched and the angle of focus. Previous research should be discussed, highlighting similarities and limitations, to explain why this research should take place and why it is significant to find answers.

B- Analyse the qualitative research methods and the approach used. Are they appropriate for the question? Is a qualitative approach appropriate? Qualitative methods are used to illuminate the actions or subjective experiences of participants, or to gain an in-depth understanding of a phenomenon.

C- Reflect on the study design and whether or not it is appropriate for the research question. For example, ethnography would be an appropriate design when studying a particular cultural group.

D- Analyse the recruitment strategy. For example, consider a study exploring the experience of rural-dwelling older adults in communicating with family members during the COVID-19 pandemic. Recruiting participants through social media and using the snowballing technique is a good way to reach a broad range of participants, but using it as the sole recruitment strategy could exclude people who are not comfortable with technology or who have not used technology to communicate with their families.

E- The sampling strategy. Are the inclusion and exclusion criteria clear and purposeful, so that the participants recruited are those of interest to the research? Also consider whether the sampling was sufficiently inclusive or whether any cohorts were left out.

F- The data collection method. Look at whether the methods used were justified to meet the requirements of the research question and whether they match the subject of the research. Are the methods varied enough to give participants choice? Also analyse the way the authors used and documented the methods. For example, if interviews were used to collect data, was the interview structured, semi-structured or unstructured, and why? In other words, look for the details of the method and the rationale that justifies its use and application.

G- Data saturation. In qualitative research, particularly when interviews or focus groups are used, data saturation is the point at which no new information or themes emerge; the data set is then said to be saturated.

Data saturation and sample sizes are sometimes under-reported in qualitative research, and sample sizes are often not well justified. Bear in mind that the appropriate sample size depends on the approach; for example, an interpretative phenomenological analysis aiming for in-depth analysis usually involves a smaller sample.

If data saturation is not mentioned in a qualitative paper, this should prompt the reader to query the validity of the results, because it cannot be known for certain whether other themes, sub-themes or contradicting viewpoints would have emerged.

This can be challenging for researchers when writing the research protocol, because the sample size must be determined at a stage when the point of data saturation is still unknown; saturation therefore needs to be monitored and reported as data collection progresses.
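
As a purely hypothetical illustration (not something the CASP checklist requires), the short Python sketch below shows one way a research team could document saturation by tracking how many previously unseen codes each successive interview contributes; the interview codes here are invented.

```python
# Hypothetical illustration: how many previously unseen codes does each
# successive interview add? Tailing zeros suggest data saturation.

def new_codes_per_interview(coded_interviews):
    """Return the number of previously unseen codes contributed by each interview."""
    seen, counts = set(), []
    for codes in coded_interviews:
        fresh = set(codes) - seen
        counts.append(len(fresh))
        seen |= fresh
    return counts

# Invented codes from five interviews about communicating with family during COVID-19
interviews = [
    {"isolation", "video calls", "tech anxiety"},
    {"isolation", "family support"},
    {"video calls", "privacy"},
    {"family support", "isolation"},
    {"video calls"},
]
print(new_codes_per_interview(interviews))  # [3, 1, 1, 0, 0]
```

A run of interviews contributing no new codes is one piece of evidence, alongside the analysts' judgement, that saturation may have been reached; reporting such a record makes a saturation claim much easier to appraise.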

H- Look at the validity of the findings, any conflict of interest, and the relationship between the researcher and the participants, as this might influence the results. Consider factors that might introduce bias into the viewpoints of the researcher or the data collection method; this is addressed through reflexivity. Reflexivity is essentially self-awareness: the researcher reflects on their own position within the research, considers their own biases and, importantly, makes potential biases clear and explicit in the research paper. COREQ is a checklist of criteria for reporting qualitative research, which authors work through when writing up a qualitative study; it includes a whole section dedicated to reflexivity, the research team, their relationships and their self-awareness.

Reflexivity is self-awareness of one’s role in the research process and of how a researcher’s viewpoint may influence that process. It is unlikely that a researcher would remain completely neutral towards a topic, with no opinion or viewpoint at all. As a result, awareness and examination of this thinking, and of our ways of doing, is important to note in qualitative research. Reflexivity is sometimes confused with reflection. Hibbert et al[3] offer a useful distinction between the two terms, suggesting that reflection is like a mirror image which gives us the opportunity to observe and examine our ways of doing, whereas reflexivity involves thinking about our experiences and questioning our ways of doing.

Section B concerns the results: what are they? The checklist poses several questions to think about:

1- Ethical considerations. Look mainly at the details of recruitment and participant involvement and whether ethical standards were maintained. For example, do the authors discuss informed consent, a participant information leaflet, and an opportunity for participants to ask questions? Also look at whether the authors discussed any particular issues raised by the study, for example the effects of the study and how they were handled, particularly when a sensitive topic is explored. How did they handle participants becoming distressed or experiencing discomfort?

2- Clarity and transparency of data analysis. Consider the steps taken in the process and whether they were free from bias. Is there an in-depth description of the analysis process and the type of analysis used?

Also consider how the data presented were selected from the original data set to demonstrate the analysis process, and whether there are enough data to support the findings. For example, if a theme states that most men with prostate cancer reported fear of recurrence, but only one quote is given to support it, is that enough? Sufficient data are usually needed, within reason. Bear in mind that qualitative papers can be lengthy, with large results sections, because the analysis goes into depth and includes many quotes; authors may not always have had space to include a large number of them. So while a quote from every participant is not expected, two or three quotes supporting each claimed theme would be reasonable.

3- Contradicting data. There will often be some contradictory data arising from the research, and this needs to be reported for clarity and transparency, with a balanced discussion of both sides.

4- Did the researchers consider other opinions without bias? A third party could be involved to reduce researcher bias and demonstrate reflexivity.

5- A clear statement of the findings. The main findings should be discussed clearly in the discussion section, with evidence both for and against the argument, whether agreeing with or contradicting previous literature. Do the authors critically analyse their findings in the context of different populations and settings, and in the context of research, practice, policy and the existing evidence base?

6- The credibility of the findings. Credibility is one of four domains of Lincoln and Guba's evaluative criteria. The four criteria are credibility, transferability, dependability, and confirmability.

Lincoln and Guba[4] suggested that in order to evaluate the worth of a study, its trustworthiness needs to be established. Trustworthiness, they suggest, involves establishing credibility, transferability, dependability, and confirmability.

Credibility refers to how confident we are in the truth of the findings. One thing to think about is triangulation, for example including different cohorts who have their own experience of a phenomenon but from a different viewpoint. For instance, a study of facilitators of and barriers to returning to work after a breast cancer diagnosis might include women who have had breast cancer, employers, healthcare professionals, partners or colleagues. Gathering these different viewpoints, and the triangulation between them, is what supports the findings and helps to reach a conclusion.

Triangulation can be also achieved by using a number of different methods.

Respondent validation, also known as member checking, involves taking the analysis back to participants to check that the conclusions are a fair representation of their accounts, rather than solely the researcher's interpretation of the interviews.

7- How valuable is this research? Is it applicable or practical to apply in practice, research, or policy, or to a different population? Look at the authors' discussion of the study's contribution to the existing body of knowledge: how does the study contribute to our understanding, and are new areas of research identified? Also consider whether the authors discuss whether the findings can be transferred to other contexts, populations or settings; in other words, check the transferability of the research.

The table below is adapted from the Cochrane Library supplemental handbook guidance[5]:

Aspect | Qualitative Term | Quantitative Term
Truth value | Credibility | Internal validity
Applicability | Transferability | External validity (generalisability)
Consistency | Dependability | Reliability
Neutrality | Confirmability | Objectivity

Assessing Credibility: Credibility evaluates whether or not the representation of the data fits the views of the participants studied, i.e. whether the findings hold true. Evaluation techniques include: having outside auditors or participants validate findings (member checks), peer debriefing, attention to negative cases, independent analysis of data by more than one researcher, verbatim quotes, persistent observation, etc.

Assessing Transferability: Transferability evaluates whether research findings are transferable to other specific settings. Evaluation techniques include: providing details of the study participants to enable readers to evaluate for which target groups the study provides valuable information, providing contextual background information and demographics, and the provision of thick description about both the sending and the receiving context, etc.

Assessing Dependability: Dependability evaluates whether the process of research is logical, traceable and clearly documented, particularly regarding the methods chosen and the decisions made by the researchers. Evaluation techniques include: peer review, debriefing, audit trails, triangulation (in the sense of using different methodological approaches to look at the topic of research), reflexivity to keep a self-critical account of the research process, calculation of inter-rater agreements, etc.

Assessing Confirmability: Confirmability evaluates the extent to which findings are qualitatively confirmable through the analysis being grounded in the data and through examination of the audit trail. Evaluation techniques include: assessing the effects of the researcher during all steps of the research process, reflexivity, and providing information on the researcher’s background, education, perspective and school of thought, etc.

The following appraisal instruments and software packages have been used in published qualitative evidence syntheses:

  • QARI software, developed by the Joanna Briggs Institute, Australia (http://www.joannabriggs.edu.au/services/sumari.php). Used by: Pearson A, Porritt KA, Doran D, Vincent L, Craig D, Tucker D, Long L, Henstridge V. A comprehensive systematic review of evidence on the structure, process, characteristics and composition of a nursing team that fosters a healthy environment. International Journal of Evidence-Based Healthcare 2006; 4(2): 118-59.
  • EPPI-Reviewer, developed by the EPPI Centre, United Kingdom (http://eppi.ioe.ac.uk/eppireviewer/login.aspx). Used by: Bradley P, Nordheim L, De La Harpa D, Innvaer S, Thompson C. A systematic review of qualitative literature on educational interventions for evidence-based practice. Learning in Health & Social Care 2005; 4(2): 89-109. Also used by: Harden A, Brunton G, Fletcher A, Oakley A. Teenage pregnancy and social disadvantage: a systematic review integrating trials and qualitative studies. British Medical Journal Oct 2009.
  • Critical Appraisal Skills Programme (CASP) checklist (http://www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf). Used by: Kane GA et al. Parenting programmes: a systematic review and synthesis of qualitative research. Child Care Health and Development 2007; 33(6): 784-793.
  • Modified versions of CASP. Used by: Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J. Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Social Science and Medicine 2003; 56: 671-84. Also used by: Malpass A, Shaw A, Sharp D, Walter F, Feder G, Ridd M, Kessler D. “Medication career” or “moral career”? The two sides of managing antidepressants: a meta-ethnography of patients' experience of antidepressants. Soc Sci Med 2009; 68(1): 154-68.
  • Quality Framework, UK Cabinet Office (http://www.gsr.gov.uk/downloads/evaluating_policy/a_quality_framework.pdf). Used by: MacEachen E et al. Systematic review of the qualitative literature on return to work after injury. Scandinavian Journal of Work Environment & Health 2006; 32(4): 257-269.
  • Evaluation Tool for Qualitative Studies (http://www.fhsc.salford.ac.uk/hcprdu/tools/qualitative.htm). Used by: McInnes RJ, Chambers JA. Supporting breastfeeding mothers: qualitative synthesis. Journal of Advanced Nursing 2008; 62(4): 407-427.

Qualitative research[2]:

  • Concerned with nuances of meaning and in-depth understanding
  • Predominantly an inductive method of enquiry, i.e. a bottom-up, data-driven approach (Bryman, 2004, p.9)
  • Contextual: importance of understanding human experience from an individual perspective (Denzin and Lincoln, 2008)
  • Flexible research strategy: commitment to an iterative process
  • Rich data: depth versus breadth
  • Small sample but high detail
  • Analysis is descriptive and interpretative
  • Researcher’s standpoint acknowledged and questioned (reflexivity)

Principles of rigour:

  • Credibility and authenticity
  • Critical appraisal and integrity
  • How can the researcher remain faithful to participants’ experience? (Hammersley, 1992)
  • Has the researcher explored alternative explanations and discrepant data, and examined bias, etc.? (Graneheim and Lundman, 2004)

Techniques for imposing rigour:

  • Triangulation
  • Reflexivity
  • Multiple coding
  • Respondent validation
  • Deviant case analysis

Triangulation: “the use of more than one method or source of data in the study of a social phenomenon”. Types include:

  • Multiple methods, e.g. focus groups, interviews, journals, observations
  • Multiple data sources, e.g. GPs, patients, carers, social care staff
  • Multiple researchers, to get different observations of the same situation

Reflexivity:

  • The position of the researcher in the research and their interaction with participants
  • Characteristics of the researcher, e.g. age, gender, profession, relationship to participants, personal experience
  • Consider what these might imply and try to limit the effects

Multiple coding:

  • Two or three independent researchers, from different disciplinary backgrounds
  • Calculate inter-rater reliability
  • A process rather than a value: discuss and resolve discrepancies
  • Generates ideas

Respondent validation:

  • Taking findings back to participants and getting their opinions on the interpretations and implications drawn from their interviews
  • Incorporating this into the analysis
  • Be aware of the limitations: the researcher is aiming to provide an overview, and this generates further data to be interpreted

Deviant case analysis:

  • The process of exploring the experiences of those participants who appear to be ‘deviant’ from the responses of the norm
  • Encourages the researcher to examine, question, develop and refine the emergent theory further
  • Most prevalent in grounded theory

Questions to ask (from Kuper et al, 2008a):

  1. Was the sample used in the study appropriate to its research question?
  2. Were the data collected appropriately?
  3. Were the data analysed appropriately?
  4. Can I transfer the results of this study to my own setting?
  5. Does the study adequately address potential ethical issues, including reflexivity?
  6. Overall: is what the researchers did clear?

Was the sample used appropriate to the research question?

  • How were participants selected and recruited?
  • Were they relevant to the research question?
  • Was the sampling strategy justified?
  • Was the sampling purposive or theoretical?
  • Was it a convenience sample?

Were the data collected appropriately?

  • Were the data collection methods appropriate for the research objectives and settings? For example: field observation (participant or non-participant), interviews (in-depth; focus groups), document analysis (diaries, letters, newspaper articles)
  • Was there explicit consideration of how the methods might have influenced the findings?

The sample size dilemma:

  • Not an issue of sample size in the statistical sense
  • Data collection needs to be comprehensive enough in breadth and depth to generate and support the interpretation
  • Adequacy depends on the emerging findings, hence the need for an iterative process
  • Can the data be audited? Is there a paper trail?

Were the data analysed appropriately?

  • Transparency of analytical methods, especially for interpretation
  • A systematic approach should have been used, whether data-driven or theory-driven (e.g. thematic analysis; grounded theory)
  • Efforts made to describe contradictory data and divergent findings
  • Multiple coding, so that findings are corroborated by more than one researcher
  • Do the results look credible and justify the conclusions?

Can I transfer the results of this study to my own setting?

  • Qualitative research is contextual, i.e. it is not seeking to be generalisable; “transferability” is the more important principle (Kuper et al, 2008a)
  • How do the findings apply in other contexts, situations and the real world?
  • The onus for assessing transferability rests with the reader, but the author has to describe the setting and context transparently and honestly
  • Real-world implications for practice should be clearly described, if not obvious

Does the study adequately address potential ethical issues, including reflexivity?

This is essentially about balancing the moral actions of the researcher against the virtues of the research aims and outcomes. Principles to follow:

  1. Autonomy
  2. Beneficence/non-maleficence
  3. Justice
  4. Avoidance of misrepresentation

In qualitative research, the components of an ethical research relationship include:

  • Acknowledgement of bias
  • Rigour
  • Rapport and managing distress
  • Respect for autonomy
  • Confidentiality, especially in research reports
  • Avoidance of exploitation (being aware of power relationships)

The importance of relevance

A form of transferability, ‘relevance’ refers to emphasising the value of the study and its wider implications (Mays & Pope, 2000). In other words: “so what?”

  • What contribution does the study make to existing knowledge?
  • Are the limitations thoughtfully discussed?
  • How do the findings fit with existing theory?
  • Does it contribute by developing new theory?
  • What are the implications for practice/service?
  • Has the study been disseminated responsibly?

Assessing qualitative research in mixed-method studies

  • Integrating qualitative and quantitative methods should be done from conception and design through to conclusion; this often ensures triangulation
  • Has the study been adequately justified? Does the qualitative component explain the descriptive, reductionist quantitative study, and is a rationale presented for the chosen methodology?
  • Was sampling conducted appropriately?
  • Ethical balance: have power relationships been considered, for example when recruiting participants from a control group?
  • Have the findings from both studies been presented jointly, and is this necessary?

CASP quality checklist

Screening questions:

  1. Was there a clear statement of the aims of the research?
  2. Is a qualitative methodology appropriate?

Detailed questions:

  3. Was the research design appropriate to address the aims of the research?
  4. Was the recruitment strategy appropriate to the aims of the research?
  5. Were the data collected in a way that addressed the research issue?
  6. Has the relationship between researcher and participants been adequately considered?
  7. Have ethical issues been taken into consideration?
  8. Was the data analysis sufficiently rigorous?
  9. Is there a clear statement of findings?
  10. How valuable is the research?

Resources

References

  1. Algeo N. Interpreting a Qualitative Research Paper. Physioplus Course. 2020.
  2. Samsi K. Critical appraisal of qualitative research. Lecture, King’s College London. 2012.
  3. Hibbert P, Coupland C, MacIntosh R. Reflexivity: recursion and relationality in organizational research processes. Qualitative Research in Organizations and Management: An International Journal. 2010.
  4. Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications. 1985.
  5. Hannes K. Chapter 4: Critical appraisal of qualitative research. In: Noyes J, Booth A, Hannes K, Harden A, Harris J, Lewin S, Lockwood C (editors), Supplementary Guidance for Inclusion of Qualitative Research in Cochrane Systematic Reviews of Interventions. Version 1 (updated August 2011). Cochrane Collaboration Qualitative Methods Group, 2011. Available from: http://cqrmg.cochrane.org/supplemental-handbook-guidance