Western University of Health Sciences
Institutional Research and Effectiveness - IRE

Assessment Plan

Overview

At WesternU, we created an assessment process that is intended to be not only effective but also sustainable. Based in large part on lessons learned from the WASC Assessment Leadership Academy as well as WASC conferences, the assessment plan was built around three basic attributes:

Each year in March, the Director of IRE and the Senior Assessment Analyst meet with program representatives to discuss that year’s assessment plan and answer any questions they may have. Program representatives may come from curriculum committees, assessment committees, or some other ad hoc group chosen to lead the process. From that point, programs have approximately four months to complete assessment reports.

Reports are submitted to the Director of IRE in July, who then distributes them among members of the Assessment/Program Review Committee for their review. Two committee members review each report, making sure that no one evaluates a report from his or her own program. To help guide feedback, committee members utilize a feedback form and an assessment evaluation rubric that describes expectations for each section of the assessment report.

Once the feedback process is completed, the Senior Assessment Analyst reviews each feedback form and assembles individual feedback reports for all programs. As a supplement, the Senior Assessment Analyst also creates a meta-report, which is shared with executives such as the Provost and the college Deans.

The four-year plan has programs assess two Institutional Learning Outcomes per year, so that all eight Institutional Learning Outcomes are assessed by the end of the fourth year. The outcomes are then reassessed over the following four years, allowing for meaningful, evaluative comparisons over time. This process gives programs information about the utility of their data collection procedures and the structure of their curriculum, and it supports ongoing monitoring of student achievement.

Table 1. WesternU Assessment Schedule

Phase | Year | Institutional Learning Outcomes
1 | 2012-13 | Evidence based practice; Interpersonal communication skills
2 | 2013-14 | Critical thinking; Collaboration skills
3 | 2014-15 | Breadth and depth of knowledge in the discipline/Clinical competence; Ethical and moral decision making skills
4 | 2015-16 | Life-long learning; Humanistic practice

 

Assessment Loop

The assessment loop consists of the entire assessment process. First, a program identifies the PLOs that align with the ILOs for that assessment year. Once this is complete, the annual plan is developed. The program then collects and analyzes the evidence. The next step is to review and discuss the results with others in the program, such as administrators, faculty, staff, and students. Finally, the program makes improvements based on the results.

 

WesternU Institutional Assessment Process

 

Milestone | Details
Planning and Preparation | Kickoff meeting; discuss ILOs to be assessed; discuss alignment with program learning outcomes; discuss evidence to be used.
Data Collection | Programs collect data to be used for the assessment report; programs may schedule a follow-up meeting with IRE to discuss assessment data and methodology strategies.
Section I: Progress Report (draft); Section II: Institutional Learning Outcome & Program Learning Outcome Alignment (draft); Section III: Methodology, Assessment Goals, & Participation (draft) | Programs are urged to submit drafts of the first three sections of the report to IRE in May. IRE will provide feedback to all participating programs.
Section IV: Results (draft) | Programs are urged to submit a draft of the fourth section of the report to IRE in June. IRE will provide feedback to all participating programs.
FINAL Assessment Report | Due July 31, 2014
Internal Review | Systematic review of reports by the assessment committee using a feedback form and rubric; creation of formal feedback reports by the Senior Assessment Analyst.
Assessment Committee Review of Reports | Reviews reports in August
Program Feedback | Meetings with all programs to discuss feedback; discussion with programs about action plans for future improvement; presentation of reports at the Deans’ Council.
Distribution of Feedback | Emailed to programs by IRE in October
Meetings of Understanding | Meeting with IRE & program in December-January
Report to Provost | IRE meets with the Provost in February
Deans’ Council Presentation | IRE presents in March
Annual Follow-up | Programs include an update on the previous year’s institutional learning outcomes in each assessment report.

 

 

Assessment Process Flowchart

 

Closing the Assessment Loop

Too often, assessment results are not used or even shared with others in the department, sometimes simply because it is unclear how to use them. Assessment is not done to point out a program’s flaws; rather, it is done to continually improve and progress in an evidence-based manner. The table below is offered as a self-check for programs. It contains questions and actions for moving forward once assessment results are complete and for closing the assessment loop; the list is not meant to be all-inclusive.

Closing the assessment loop self-evaluation

Closing the Loop: Questions & Actions

1 What do the findings tell us?  
2 What is the next step?  
3 What have we learned about our assessment process? (What can be improved?)  
  Curriculum-related actions  
4 Revise course content  
5 Change/add assignments  
6 Change how courses are taught  
7 Revise course prerequisites  
8 Modify frequency or schedule of course offerings  
  Resource-related actions  
9 Hire or re-assign faculty and/or staff  
10 Increase classroom space  
11 Additional staff and/or faculty development opportunities  
12 Improve use of technology  
13 Work with other units on campus (e.g., IRE, CAPE, Library) to assist in improving student learning
  Academic process actions  
14 Revise advising standards or processes  
15 Revise admission criteria  
16 Share results with faculty, staff and students regularly  
  Program promotion actions  
17 Communicate student work to stakeholders (e.g., brochures, website)


Adapted from the University of Hawaii Manoa (manoa.hawaii.edu/assessment)

 

Curriculum Map

A curriculum map is a table with one column for each program learning outcome (PLO) and one row for each course. In addition, the program can include the institutional learning outcomes (ILOs) that align with each PLO, as seen in the table below:

Courses | ILO 1       | ILO 2       | ILO 3       | ILO 4
        | PLO1 | PLO2 | PLO3 | PLO4 | PLO5 | PLO6 | PLO7 | PLO8
101     | I    |      | I    |      | I    |      | I    |
102     | D    | I    | D    |      | D    | I    |      | I
103     |      | D    |      | I    | M, A |      | D    | D
104     | M, A |      | M, A | D    |      | D    |      | M, A
105     |      | M, A |      | M, A |      | M, A | M, A |


In the table, I means the outcome is introduced, D means it is developed, M means mastery at a level appropriate for graduation, and A means assessment evidence is collected. The table above is an example for a program containing only five courses; it is meant only to illustrate how the table should look. A real curriculum map should include all courses in the program.
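
For programs that keep their curriculum map in electronic form, the sketch below shows one optional, illustrative way to represent a map like the sample table above and to check which PLOs have assessment evidence (A) mapped to a course. The course numbers and mappings are hypothetical (they mirror part of the sample table) and are not drawn from any actual WesternU program.

```python
# Illustrative sketch only: a curriculum map as a nested dictionary.
# Levels follow the legend above: "I" introduced, "D" developed,
# "M" mastery at a level appropriate for graduation, "A" assessment evidence collected.
# Course numbers and mappings are hypothetical, mirroring part of the sample table.

curriculum_map = {
    "101": {"PLO1": ["I"], "PLO3": ["I"], "PLO5": ["I"], "PLO7": ["I"]},
    "102": {"PLO1": ["D"], "PLO2": ["I"], "PLO3": ["D"], "PLO5": ["D"], "PLO6": ["I"], "PLO8": ["I"]},
    "104": {"PLO1": ["M", "A"], "PLO3": ["M", "A"], "PLO4": ["D"], "PLO6": ["D"], "PLO8": ["M", "A"]},
}

# For each of the eight PLOs, list the courses where assessment evidence ("A") is collected.
for plo in [f"PLO{i}" for i in range(1, 9)]:
    assessed_in = [course for course, levels in curriculum_map.items() if "A" in levels.get(plo, [])]
    print(f"{plo}: {', '.join(assessed_in) if assessed_in else 'no assessment evidence mapped'}")
```

A check like this can help a program spot PLOs that are developed in the curriculum but never assessed, which is useful when preparing the annual assessment plan.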

 

Direct vs. Indirect Assessment Methods

An assessment method is the means for measuring the degree of success that a program has achieved in meeting a program outcome that aligns with an institutional learning outcome. More than one assessment method should be used. A minimum of one direct and one indirect method is required.

1) Direct methods measure what was learned or accomplished. They rely on direct examination of student performance: the products and behaviors that reveal what students know and can do.

2) Indirect methods measure perceptions of learning, or what should have been learned, through surveys or other means. They require faculty to make inferences about student knowledge rather than observe it directly. Indirect measures can reveal why or how students learn.

 

Direct Assessment Methods

Each method below is described with a definition and examples, followed by its potential strengths and potential limitations.

Embedded Assignments and Course Activities

Embedded assessment techniques use existing student course work both as a grading instrument and as data for the assessment of Program Learning Outcomes (PLOs).

(e.g.)

-          Preceptor evaluation of students

-          Panel discussion

-          Didactic presentation

-          Capstone courses

-          Portfolios

-          Senior research project

 

Potential Strengths:
  • Students are motivated to do well because the assignment is part of their course grade.
  • Online submission and review of materials is possible.
  • Data collection is unobtrusive to students.
  • It provides a sophisticated, multi-level view of student achievement.
  • Authentic assessment of learning objectives can be obtained.
  • It can involve ratings by a fieldwork supervisor.
  • It can be used for grading as well as assessment.
  • Faculty who develop the procedures are likely to be interested in the results and willing to use them.
Potential Limitations:
  • It is time-consuming to develop and coordinate evaluation methods.
  • Reliability and validity are unknown.
  • Norms generally are not available.

Published tests

Standardized tests developed by outside organizations are used by programs to assess general knowledge in a discipline.

(e.g.)

-          Objective structure clinical exam (OSCE)

-          Licensing or certification exams

Potential Strengths:
  • National comparisons can be made.
  • Reliability and validity are monitored by the test developers.
  • An external organization handles test administration and evaluation.
  • A number of norm groups, such as norms for community colleges, liberal arts colleges, and comprehensive universities, are provided.
  • Online versions of tests are increasingly available, and some provide immediate scoring.
  • Some publishers allow faculty to supplement tests with their own items, so tests can be adapted to better serve local needs.
Potential Limitations:
  • The test may not be aligned with the PLOs.
  • Information from test results may be too broad to be used for decision making.
  • Tests can be expensive.
  • Most published tests rely heavily on multiple-choice items that often focus on specific facts, but PLOs more often emphasize higher-level skills.
  • Students may not be motivated to do well if test results have no impact on their lives.
  • The marginal gain from annual testing may be low.
  • Faculty may object to standardized exam scores on general principles, leading them to ignore results.

Locally developed tests

A test is developed within the institution to be used internally.  It is typically administered to a representative sample in order to develop local norms and standards.

 

(e.g.)

-          Final exams

-          Common exams

-          Internship Evaluation

-          Oral final exams

-          Pre-clinical examination

Potential Strengths:
  • Appropriate mixes of items allow faculty to address various types of learning objectives.
  • Authentic assessment of higher-level learning can be obtained.
  • Students generally are motivated to display the extent of their learning.
  • If well constructed, these exams are likely to have good validity.
  • Because local faculty write the exam, they are likely to be interested in the results and willing to use them.
  • Testing can be integrated into routine faculty workloads.
  • Campuses with similar missions could decide to develop their own norms, and they could assess student work together or provide independent assessment of each other’s student work.
  • Discussion of results focuses faculty on student learning and program support for it.
Potential Limitations:
  • Campus or program is responsible for test reliability, validity, and evaluation.
  • These exams are likely to be less reliable than published exams.
  • It is time-consuming to score the test.
  • Creating effective exams requires time and skills.
  • Traditional testing methods, such as multiple-choice items, may not provide authentic measurement.
  • Norms generally are not available.

Case studies

Contrived scenarios, often based on real situations or facts, permit students to apply and demonstrate their skills and knowledge in predetermined settings.

(e.g.)

-          Standardized patient evaluation

Potential Strengths:
  • Case studies provide a practical and in-depth way to observe (measure) acquired skills, knowledge, attitudes, etc.
  • Case studies can be flexibly tailored to specific PLOs.
  • Case studies are often fun for the students.
Potential Limitations:
  • Case studies may easily become unproductive “play”.
  • Case studies may be costly.
  • They are time-consuming to plan, implement, and analyze.

Student performance

Student participation is evaluated in campus and/or community events, volunteer work, presentations, clinical experiences, internships, musical or art performances, etc.

(e.g.)

-          Evaluations of interns

-          Service learning evaluation

-          Preceptor evaluation of students

-          Student presentations of research to forums/professional organizations

Potential Strengths:
  • Evaluation by a career professional is often highly valued by students.
  • Faculty members can learn what is expected by community members.
Potential Limitations:
  • Lack of standardization across evaluations may make the analysis difficult.

Culminating experiences

Students produce works that show their cumulative experiences in a program. Capstones provide a means to assess student achievement across a discipline.

(e.g.)

-          Capstone courses

-          Senior research projects

-          Portfolios

Potential Strengths:
  • Culminating experiences through capstone courses or senior research projects provide a sophisticated, multi-level view of student achievement.
  • Students have the opportunity to integrate their learning.
Potential Limitations:
  • Creating an effective, comprehensive culminating experience can be challenging.
  • It is time-consuming to develop evaluation methods (multiple rubrics may be needed).

Collection of work samples

Students’ work collected throughout a program is assessed using a scoring guide/rubric. Portfolios may contain research papers, reports, tests, exams, case studies, video, personal essays, journals, self-evaluation, exercises, etc.

 

(e.g.)

-          Portfolios

 

Potential Strengths:
  • Portfolios provide a comprehensive, holistic view of student achievement and/or development over time.
  • Students can use portfolios in preparation for graduate school or employment.
  • Online submission and review of materials are possible.
  • Students may become more aware of their own academic growth.
  • Students are not required to do extra work if the portfolio is a course assignment.
  • Discussion of results focuses faculty on student learning and program support for it.
Potential Limitations:
  • It may be costly and time-consuming for both students and faculty.
  • Students may not take the process seriously (collection, reflection, etc.).
  • Accommodations need to be made for transfer students for longitudinal or developmental portfolios.
  • It may be difficult to protect student confidentiality and privacy.
  • Students may refrain from criticizing the program if their portfolio is graded or if their names will be associated with portfolios during the review.

Pre- and post-measures

An exam is administered at the beginning and at the end of a course or program in order to determine the progress of student learning.

Potential Strengths:
  • It provides “value-added” or growth information.
Potential Limitations:
  • Evaluating students more than once increases the workload.
  • Designing pre- and post-tests that are truly comparable at different times is difficult.
  • A statistician may be needed to properly analyze the results.

Grading using scoring rubrics

Scoring rubrics outline the identified criteria for successfully completing an assignment. They can be used to score everything from essays to performances.

Potential Strengths:
  • Clearer learning targets and clearer instructional design and delivery are obtained.
  • Rubrics make the assessment process more accurate and fair.
  • Rubrics can provide students with a tool for self-assessment and peer feedback.
  • Rubrics have the potential to advance the learning of students of color, first-generation students, and those from non-traditional settings.
Potential Limitations:
  • It is time-consuming to design meaningful rubrics.
  • If poorly designed, they can diminish the learning process.

 

 

Indirect Assessment Methods

Each method below is described with a definition and examples, followed by its potential strengths and potential limitations.

Surveys

A mailed, emailed, telephone, or web-based questionnaire is used to acquire feedback from individuals and to measure students’ attitudes and opinions related to their education.

(e.g.)

-          Student self-evaluation surveys

-          Graduating student surveys

-          Student perception of learning surveys

-          Alumni surveys

-          Employer surveys

-          Advisory perception survey

-          General faculty survey

-          Student evaluation of rotation   experience

-          Student evaluation of faculty

Potential Strengths:
  • Surveys are flexible in format, such as paper/pencil, telephone, and website.
  • Many issues can be included in the questions.
  • Surveys can be administered to large groups of respondents, including people at distant sites, at a relatively low cost.
  • Survey questions generally have a clear relationship to the objectives being assessed.
  • Surveys can be conducted relatively quickly.
  • Responses to closed-ended questions are easy to analyze, tabulate, and report in tables or graphs.
  • Open-ended questions allow faculty to uncover unanticipated results.
  • Opinions can be tracked across time to explore trends.
Potential Limitations:
  • Their validity and reliability depend on the quality of the questions and response options.
  • Conclusions can be inaccurate if biased samples are obtained.
  • Low response rates are typical.
  • Results might not include the full array of opinions if the sample is small.
  • Students’ perception may be inconsistent with their actual ability or behavior.
  • Open-ended responses are time-consuming to analyze.
  • Survey results could be a property of individual faculty members.

Interviews

Interviews are conducted with individual students; they may be structured, with open-ended or closed-ended questions, or completely unstructured, without predetermined questions.

(e.g.)

-          Exit interviews

-          Consultation with internship supervisors

-          Consultation with advisory board/council

Potential Strengths:
  • Students’ insights on their beliefs, attitudes, and experiences can be obtained.
  • Interviewers can ask follow-up questions to gain more detailed responses.
  • Interviewers can respond to questions and clarify misunderstandings.
  • Telephone interviews can be used to reach distant students.
  • A sense of immediacy and personal attention for students can be provided.
  • Open-ended questions allow faculty to uncover unanticipated results.
  • Rich, in-depth information can be obtained.
  • Students’ narratives and voices can be powerful evidence.
Potential Limitations:
  • Their validity depends on the quality of the questions.
  • Poor interviewer skills can generate limited or useless information.
  • It is difficult to obtain a representative sample of respondents.
  • Students’ perception may be inconsistent with their real ability or behavior.
  • The process may intimidate some students, especially if they are asked about sensitive issues and their identity is known to the interviewer.
  • Transcribing, analyzing, and reporting interview data are time-consuming.

Focus groups

Structured discussions are conducted with students, who are asked a series of open-ended questions designed to collect data about their beliefs, attitudes, and experiences.

Potential Strengths:
  • The questions generally have a clear relationship to the outcomes being assessed.
  • Focus groups can be combined with other techniques, such as surveys.
  • The process allows faculty to uncover unanticipated results.
  • Students’ insights on their beliefs, attitudes, and experiences can be obtained.
  • Focus groups can be conducted within courses.
  • Students have the opportunity to react to each other’s ideas, providing an opportunity to uncover the degree of consensus on ideas that emerge during the discussion.
  • Rich, in-depth information can be obtained.
  • Tailored follow-up questions can be asked.
  • The group dynamic may spark more information.
  • Students’ narratives and voices can be powerful evidence.
Potential Limitations:
  • Results might not include the full array of opinions if only one focus group is conducted.
  • Students’ perception may be inconsistent with their real ability or behavior.
  • Recruiting and scheduling the groups can be difficult.
  • Trained facilitators are needed.
  • Collecting, transcribing, analyzing, and reporting data are time-consuming.

Institutional Data

Program and student data is collected at the institutional level.

 

(e.g.)

-          Graduation rates

-          Time to degree

-          Retention rates

-          Persistence/return rates

-          Job placement rates

Potential Strengths:
  • Institutional data can be effective when linked to other performance measures.
  • Institutional data satisfies some accreditation agencies’ reporting requirements.
Potential Limitations:
  • It is a source of information but does not directly evaluate PLOs.

 

 Adapted from:

Allen, M. J. (2004). Assessing Academic Programs in Higher Education. Bolton, MA: Anker Publishing.
http://manoa.hawaii.edu/assessment/howto/methods.htm
https://insidecbu.calbaptist.edu/ICS/icsfs/Assessment_Methods.pdf?target=2bc703ff-bb7e-4693-bb83-fc8e94ff88c4
http://www.sunyorange.edu/assessmentapa/docs/AssessmentMETHODS.pdf
http://www.uncw.edu/cte/et/articles/vol7_1/wolf.pdf