Programme of Study: Strategic Assessment Plan

Original Editor - Stacy Schiurring based on the course by Larisa Hoffman

Top Contributors - Stacy Schiurring

Introduction[edit | edit source]

Professional programmes are expected to perform a self-evaluation of programme success.  This is often done using a quality improvement process such as the Plan-Do-Study-Act cycle:[1]

  1. Planning
  2. Doing
  3. Studying
  4. Acting


During the planning phase, the philosophy, mission, and vision statements are revised, and an assessment plan is created.[1]

Strategic Assessment Plan[edit | edit source]

A strategic assessment plan collects data to determine the effectiveness of the programme in implementing its mission and achieving programme outcomes.  The purpose of the plan is to identify strengths and opportunities using objective data.  These data can guide specific recommendations to enhance the programme and inform programmatic revisions (such as faculty development or modifying clinical experiences).[2]

Common sources of evidence used in assessment include: (1) behavioural observations, (2) exit interviews, (3) external examiners, (4) focus groups, (5) locally developed written or oral performance examinations, (6) performance appraisals or simulated patient experiences, (7) portfolios, (8) surveys or questionnaires, (9) standardized tests, and (10) archival records.[1][3]  Careful selection of the evaluative method used to provide evidence is important.[4]  The evidence should be appropriate, represent the best available method of measurement, and reflect contemporary practice using real-life experiences.[4]

Evaluation of Strategic Assessment Plan Outcomes[edit | edit source]

The plan should evaluate programme outcomes, as well as programme processes, using a contextual lens.[5] The plan should include a variety of assessment tools, including summative and formative evaluation, as well as qualitative and quantitative measures.[6]

Summative measures assess the final outcome of the learning process, examining the effectiveness of the educational programme.[7]

  • Summative measures can be used to assess achievement of programme outcomes[7] and to identify programmatic strengths and weaknesses or student progression.[6] 
  • Useful summative measures describe the effectiveness of the programme in achieving outcomes, using feedback from different stakeholders and comparison with similar programmes.[5]  


Formative evaluation examines the process of the instruction and provides feedback for improvement of the process.[7]

  • Formative measures can be used to inform programme procedures. 
  • Formative measures describe the effectiveness of the programme's processes in terms of quality, efficiency, sustainability, and fiscal responsibility, using feedback from multiple stakeholders.[5]  
  • Useful data collection techniques for evaluating processes include observation, document review, and focus groups with faculty and students.[5]


The strategic assessment process should be grounded in local context.[5]  The level of achievement and competencies required should be based on international standards, but informed by requirements for the local healthcare worker (Frenk, 2010). 

  • Assessing local context should include describing the educational needs of the graduate, identifying potential barriers, and identifying resources to address those needs and barriers.[5]
  • Useful data collection techniques to perform a situational assessment include document review, interviews and focus groups, and surveys.[5]

Programme Attributes to Evaluate in a Strategic Assessment Plan[edit | edit source]

Characteristics that should be evaluated in a strategic assessment plan include:[4] 

  1. achievement of programme outcomes
  2. evidence that the course objectives in the curriculum are informed by international and local standards
  3. achievement of course objectives
  4. access to institutional resources
  5. effectiveness of the programme director

Programme Outcomes[edit | edit source]

Benchmarking against programme outcomes is one way to organise the evaluation of programme effectiveness.  Achievement of programme outcomes can be measured using qualitative techniques, such as surveys and focus groups, with different stakeholders including employers of alumni, alumni, and graduates of the programme.  

  • Surveys can measure how prepared graduates are (well, sufficiently, or insufficiently prepared) to meet the standards described in the programme outcomes. 
  • Focus groups can be used to gather additional data including programme strengths and opportunities considering the programme outcomes. 
  • Identifying themes through surveys and following up with focus groups can aid in decision making for the programme enhancement phase.  
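As a minimal sketch of the tallying step described above, preparedness ratings gathered from such a survey can be summarised as simple percentages before themes are explored in focus groups. The rating labels and responses below are hypothetical examples, not data from any actual programme.

```python
from collections import Counter

# Hypothetical alumni responses rating preparedness for one
# programme outcome: "well", "sufficiently", or "insufficiently".
responses = ["well", "well", "sufficiently", "well",
             "insufficiently", "sufficiently", "well"]

counts = Counter(responses)
total = len(responses)

# Percentage of respondents giving each rating, rounded to one decimal.
summary = {rating: round(count / total * 100, 1)
           for rating, count in counts.items()}
```

A low proportion of "well prepared" responses for a given outcome would flag that outcome as a topic for follow-up focus groups.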

Curriculum[edit | edit source]

The curriculum should balance competencies that are consistent with professional international standards[4] with local standards.[8]  The curriculum should be guided by an intentional plan, and evaluated based on the plan.  Please see this article for more information on the curriculum assessment plan. Data used to provide feedback on the curricular plan includes data from curriculum and clinical assessment, as well as advisory boards.  

Courses [edit | edit source]

Courses should be evaluated according to (1) student achievement of course objectives; (2) student and faculty feedback on pedagogical strategies, learning activities, and evaluative activities; and (3) faculty reflection on the course. 

Matching course objectives to the summative activity provides validity for the assessment.  Examples of summative evaluative activities that assess achievement of course objectives include (1) performance-based assessments, (2) assignments that include elements of clinical decision making, and (3) comprehensive examinations. 

  • Performance based assessments and written assignments can be evaluated with greater objectivity and consistency with the use of checklists or rubrics to measure changes in knowledge, skills, and attitudes in the learner.[9]
  • Simulated patients or guest participants can describe their comfort and perception of a student's confidence in performing a skill using an interview or checklist.[9]
  • Comprehensive or final examinations are often used to evaluate a student's didactic knowledge, often through multiple choice exams, verbal defence, or short answer written essays.  Multiple choice items should be evaluated according to their psychometric properties, including item difficulty (the proportion of students who answered the item correctly) and the item's point-biserial correlation.  The point-biserial correlation is a measure of discrimination: a positive point-biserial correlation on a test item suggests that students who scored high on the total exam were more likely to select the correct answer on that item.
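The point-biserial calculation described above can be sketched in a few lines using its standard formula: the difference in mean total scores between students who answered the item correctly and incorrectly, scaled by the standard deviation of total scores and the item difficulty. The function name and scores below are illustrative, not taken from any real exam.

```python
from statistics import mean, pstdev

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a dichotomous item
    (1 = answered correctly, 0 = incorrectly) and total exam scores."""
    right = [s for c, s in zip(item_correct, total_scores) if c == 1]
    wrong = [s for c, s in zip(item_correct, total_scores) if c == 0]
    p = len(right) / len(item_correct)    # item difficulty (proportion correct)
    sd = pstdev(total_scores)             # population SD of total scores
    return (mean(right) - mean(wrong)) / sd * (p * (1 - p)) ** 0.5

# A well-discriminating item: high scorers tend to answer it correctly.
r = point_biserial([1, 1, 1, 0, 0, 0], [90, 85, 80, 60, 55, 50])
```

Values near +1 indicate strong discrimination; values near zero or negative flag items that may need revision or removal.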


Course evaluations can provide additional feedback opportunities for instructors to reflect on the outcome of the course.  Course evaluations from students and faculty colleagues can provide feedback on (1) course organization, (2) clarity of learning objectives, (3) areas of emphasis, (4) quality of learning activities (problem-based learning, small group discussions, online exercises), (5) clarity of expectations, and (6) level of knowledge.[10]

It may be useful to organise the assessment plan in a table, with columns for the programme outcome, benchmark criteria, responsible party, time frame for data collection, method of data collection, summary of the data collected, and suggestions arising from the data.[4] A downloadable version of this table is available in the Resources Section.
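For illustration, one row of such a table can be represented as a simple record with fields matching the columns described above. The field names and values below are hypothetical examples, not part of the downloadable resource.

```python
# Hypothetical sketch of one row of the assessment-plan table.
plan_row = {
    "programme_outcome": "Graduates demonstrate evidence-based practice",
    "benchmark_criteria": "90% of alumni rate themselves well or sufficiently prepared",
    "responsible_party": "Programme director",
    "time_frame": "Annually, end of academic year",
    "data_collection_method": "Alumni survey with follow-up focus group",
    "data_summary": None,   # completed after data collection
    "suggestions": None,    # completed during the programme enhancement phase
}
```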

Resources[edit | edit source]

Clinical Resources[edit | edit source]

Recommended Reading[edit | edit source]

References[edit | edit source]

  1. Brown JF, Marshall BL. Continuous quality improvement: An effective strategy for improvement of program outcomes in a higher education setting. Nursing Education Perspectives. 2008 Jul 1;29(4):205-11.
  2. Arcario P, Polnariev BA. Closing the loop: How we better serve our students through a comprehensive assessment process. Metropolitan Universities. 2013 Jan 1;24(2):21-37.
  3. White B, McCarthy R. The development of a comprehensive assessment plan: One campus’ experience. Information Systems Education Journal. 2007;5(35):3-16.
  4. Lewallen LP. Practical strategies for nursing education program evaluation. Journal of Professional Nursing. 2015 Mar 1;31(2):133-40.
  5. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE guide no. 67. Medical Teacher. 2012 May 1;34(5):e288-99.
  6. Wilkinson TJ, Hudson JN, McColl GJ, Hu WC, Jolly BC, Schuwirth LW. Medical school benchmarking – from tools to programmes. Medical Teacher. 2015 Feb 1;37(2):146-52.
  7. Bhat BA, Bhat GJ. Formative and summative evaluation techniques for improvement of learning process. European Journal of Business & Social Sciences. 2019 May;7(5):776-85.
  8. Hautz SC, Hautz WE, Feufel MA, Spies CD. Comparability of outcome frameworks in medical education: Implications for framework development. Medical Teacher. 2015 Nov 2;37(11):1051-9.
  9. Durning SJ, Hemmer P, Pangaro LN. The structure of program evaluation: an approach for evaluating a course, clerkship, or components of a residency or fellowship training program. Teaching and Learning in Medicine. 2007 Jun 19;19(3):308-18.
  10. Karpa K, Abendroth CS. How we conduct ongoing programmatic evaluation of our medical education curriculum. Medical Teacher. 2012 Oct 1;34(10):783-6.