At Western University, we created a process that is not only effective, but also sustainable. Based in large part on lessons learned from the WASC Assessment Leadership Academy as well as WASC conferences, an assessment plan was created with three basic attributes:
- Assessment of Western University’s Institutional Learning Outcomes will occur on a four-year cycle, with programs submitting an assessment report on two outcomes per year.
- Direct assessment evidence should come from signature assignments, which are comprehensive and representative instances of student work related to one or more outcomes.
- Programs will operationalize assessments using program learning outcomes to maximize the meaning of their work.
Each year in March, the Director of IRE and the Senior Assessment Analyst meet with program representatives to discuss that year’s assessment plan and answer any questions they may have. Program representatives may come from curriculum committees, assessment committees, or some other ad hoc group chosen to lead the process. From that point, programs have approximately four months to complete assessment reports.
Reports are submitted to the Director of IRE in July, who then distributes them among members of the Assessment/Program Review Committee for their review. Two committee members review each report, making sure that no one evaluates a report from his or her own program. To help guide feedback, committee members utilize a feedback form and an assessment evaluation rubric that describes expectations for each section of the assessment report.
Once the feedback process is completed, the Senior Assessment Analyst reviews each feedback form and assembles individual feedback reports for all programs. As a supplement, the Senior Assessment Analyst also creates a meta-report, which is shared with executives such as the Provost and the college Deans.
WesternU Institutional Assessment Process
| Step | Description |
|---|---|
| Planning and Preparation | Kickoff meeting; discuss ILOs to be assessed; discuss alignment with program learning outcomes; discuss evidence to be used. |
| Data Collection | Programs collect data to be used for the assessment report; programs may schedule a follow-up meeting with IRE to discuss assessment data and methodology strategies. |
| Sections I–III (draft): Progress Report; ILO & PLO Alignment; Methodology, Assessment Goals, & Participation | Programs are urged to submit drafts of the first three sections of the report to IRE in May. IRE will provide feedback to all participating programs. |
| Section IV: Results (draft) | Programs are urged to submit a draft of the fourth section of the report to IRE in June. IRE will provide feedback to all participating programs. |
| FINAL Assessment Report Due | July 31, 2015 |
| Internal Review | Systematic review of reports by the assessment committee using the feedback form and rubric; creation of formal feedback reports by the Senior Assessment Analyst. |
| Assessment Committee Review of Reports | Reports are reviewed in August. |
| Program Feedback | Meetings with all programs to discuss feedback; discussion with programs about action plans for future improvement; presentation of reports at the Deans’ Council. |
| Distribution of Feedback | Emailed to programs by IRE in October. |
| Meetings of Understanding | Meetings between IRE and each program in December–January. |
| Report to Provost | IRE meets with the Provost in February. |
| Deans’ Council Presentation | IRE presents in March. |
| Annual Follow-up | Programs include an update on the previous year’s institutional learning outcomes in each assessment report. |
Assessment Process Flowchart
The four-year plan has programs assess two Institutional Learning Outcomes per year, so that all eight Institutional Learning Outcomes have been assessed by the end of the fourth year. The outcomes are then reassessed over the following four years, allowing evaluative and meaningful comparisons across cycles. This process gives programs information about the utility of their data collection procedures and the structure of their curriculum, and supports the monitoring of student achievement.
The Four-year Plan
| Phase | Year | Institutional Learning Outcomes |
|---|---|---|
| 1 | 2012-13 | Evidence based practice; Interpersonal communication skills |
| 2 | 2013-14 | Critical thinking; Collaboration skills |
| 3 | 2014-15 | Breadth and depth of knowledge in the discipline/Clinical competence; Ethical and moral decision making skills |
| 4 | 2015-16 | Life-long learning; Humanistic practice |
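The rotation above is easy to encode if a program wants to look up which outcomes fall in a given year. The following is a minimal Python sketch, assuming the phase pairings and the 2012-13 start year from the table; the function and variable names are illustrative, not part of any WesternU system.

```python
# A sketch of the four-year ILO rotation described in the table above.
# Phase pairings come from the four-year plan; the names below are
# illustrative, not part of any official WesternU system.

PHASES = [
    ("Evidence based practice", "Interpersonal communication skills"),
    ("Critical thinking", "Collaboration skills"),
    ("Breadth and depth of knowledge in the discipline/Clinical competence",
     "Ethical and moral decision making skills"),
    ("Life-long learning", "Humanistic practice"),
]

START_YEAR = 2012  # the 2012-13 academic year begins phase 1


def ilos_for_year(start_of_academic_year: int) -> tuple:
    """Return the two ILOs scheduled for the academic year that begins in
    the given calendar year; the cycle repeats every four years."""
    phase_index = (start_of_academic_year - START_YEAR) % len(PHASES)
    return PHASES[phase_index]
```

Because the schedule repeats, `ilos_for_year(2016)` returns the same pair as `ilos_for_year(2012)`, which is exactly the reassessment in the second cycle.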
The assessment loop consists of the entire assessment process. First, a program identifies the PLOs that align with the ILOs for that assessment year. Once this is completed, the annual plan is developed. The program then collects and analyzes the evidence. The next step is to review and discuss the results with others in the program, such as administration, faculty, staff, and students. Finally, the program makes improvements based on the results.
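The loop described above can be summarized as an ordered sequence of steps. The sketch below is a minimal Python illustration; the step wording is paraphrased from the paragraph, and the helper name is hypothetical.

```python
# The assessment loop as an ordered list of steps. The wording is
# paraphrased from the description above; the helper name is hypothetical.

ASSESSMENT_LOOP = [
    "Identify PLOs that align with this year's ILOs",
    "Develop the annual assessment plan",
    "Collect and analyze the evidence",
    "Review and discuss results with administration, faculty, staff, and students",
    "Make improvements based on the results",
]


def next_step(current: str) -> str:
    """Return the step that follows the given one; after the last step the
    loop closes and begins again with identifying PLOs."""
    i = ASSESSMENT_LOOP.index(current)
    return ASSESSMENT_LOOP[(i + 1) % len(ASSESSMENT_LOOP)]
```

The modulo makes the final step wrap back to the first, which is the sense in which the loop is "closed."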
Closing The Assessment Loop
Too often, results are not used or even shared with others in the department, sometimes simply because no one knows how to use them. Assessment is not done to point out the flaws in a program; rather, it is done to continually improve and progress in an evidence-based manner. The table below is offered as a self-check for programs: a list of questions and actions for moving forward once assessment results are in hand, in order to close the assessment loop. The list is not meant to be all-inclusive.
Closing the assessment loop self-evaluation
| # | Closing the loop: questions & actions |
|---|---|
| 1 | What do the findings tell us? |
| 2 | What is the next step? |
| 3 | What have we learned about our assessment process? (What can be improved?) |
| 4 | Revise course content |
| 5 | Change how courses are taught |
| 6 | Revise course prerequisites |
| 7 | Modify frequency or schedule of course offerings |
| 8 | Hire or re-assign faculty and/or staff |
| 9 | Increase classroom space |
| 10 | Additional staff and/or faculty development opportunities |
| 11 | Improve use of technology |
| 12 | Work with other units on campus (e.g., IRE, CAPE, Library) to assist in improving student learning |
| | Academic process actions |
| 13 | Revise advising standards or processes |
| 14 | Revise admission criteria |
| 15 | Share results with faculty, staff, and students regularly |
| | Program promotion actions |
| 16 | Communicate student work to stakeholders (e.g., brochures, website) |
Adapted from the University of Hawaii Manoa (manoa.hawaii.edu/assessment)
A curriculum map is a table with one column for each program learning outcome (PLO) and one row for each course. In addition, the program can include the institutional learning outcomes (ILOs) that align with each PLO, as in the table below:
| Courses | ILO 1 | ILO 2 | ILO 3 | ILO 4 |
|---|---|---|---|---|
| 104 | M, A | M, A | D | D |
| 105 | M, A | M, A | M, A | M, A |
In the table, I means introduced, D means developed, M means mastery at a level appropriate for graduation, and A means assessment evidence is collected. The table above is only an example of how a curriculum map should look; a real map should include every course in the program.
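Because a curriculum map is just a course-by-outcome grid, it can be kept as simple structured data and checked automatically. The following is a minimal Python sketch using the markers from the example table above; the data layout and helper name are illustrative only.

```python
# A curriculum map kept as structured data: course -> {outcome: markers}.
# I = introduced, D = developed, M = mastery at a level appropriate for
# graduation, A = assessment evidence collected. The rows mirror the
# example table above; the helper name is illustrative.

curriculum_map = {
    "104": {"ILO 1": {"M", "A"}, "ILO 2": {"M", "A"}, "ILO 3": {"D"}, "ILO 4": {"D"}},
    "105": {"ILO 1": {"M", "A"}, "ILO 2": {"M", "A"}, "ILO 3": {"M", "A"}, "ILO 4": {"M", "A"}},
}


def outcomes_missing_assessed_mastery(cmap):
    """List outcomes that no course both brings to mastery (M) and collects
    assessment evidence for (A) -- coverage gaps a program would want to close."""
    outcomes = {o for markers in cmap.values() for o in markers}
    return sorted(
        outcome for outcome in outcomes
        if not any({"M", "A"} <= course.get(outcome, set()) for course in cmap.values())
    )
```

In this example every ILO reaches assessed mastery in course 105, so the check returns an empty list; a non-empty result would flag outcomes the program teaches but never assesses at mastery.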
Direct vs. Indirect Assessment Methods
An assessment method is the means for measuring the degree of success that a program has achieved in meeting a program outcome that aligns with an institutional learning outcome. More than one assessment method should be used. A minimum of one direct and one indirect method is required.
1) Direct methods measure what was actually learned or accomplished, relying on direct examination of student performance. Direct evidence can be thought of as a student product or behavior that reveals what students know and can do.
2) Indirect methods measure perceptions of learning, or what should have been learned, through surveys or other means. They require faculty to infer student knowledge rather than observe it. Indirect measures can reveal why or how students learn.
Direct Assessment Methods
(Definitions and Examples)
Embedded Assignments and Course Activities
Embedded assessment techniques use existing student course work both as a grading instrument and as data for the assessment of Program Learning Outcomes (PLOs).
– Preceptor evaluation of students
– Panel discussion
– Didactic presentation
– Capstone courses
– Senior research project
Standardized tests
Standardized tests developed by an outside organization are used by programs to assess general knowledge in a discipline.
– Objective structured clinical examination (OSCE)
– Licensing or certification exams
Locally developed tests
A test is developed within the institution to be used internally. It is typically administered to a representative sample in order to develop local norms and standards.
– Final exams
– Common exams
– Internship Evaluation
– Oral final exams
– Pre-clinical examination
Contrived scenarios, often based on real situations or facts, permit students to apply and demonstrate their skills and knowledge under predetermined conditions.
– Standardized patient evaluation
Students’ participation in campus and/or community events, volunteer work, presentations, clinical rotations, internships, musical or art performances, etc., is evaluated.
– Evaluations of interns
– Service learning evaluation
– Preceptor evaluation of students
– Student presentation of research to a forum/professional organizations
Capstone experiences
Students produce works that show their cumulative experiences in a program. Capstones provide a means to assess student achievement across a discipline.
– Capstone courses
– Senior research projects
Collection of work samples
Students’ work collected throughout a program is assessed using a scoring guide/rubric. Portfolios may contain research papers, reports, tests, exams, case studies, video, personal essays, journals, self-evaluation, exercises, etc.
Pre- and post-measures
An exam is administered at the beginning and at the end of a course or program in order to determine the progress of student learning.
Grading using scoring rubrics
Rubrics outline the criteria for successfully completing an assignment. They can be used to score everything from essays to performances.
Indirect Assessment Methods
Surveys
A mailed, emailed, telephone, or web-based questionnaire is used to acquire feedback from individuals and to measure students’ attitudes and opinions related to their education.
– Student self-evaluation surveys
– Graduating student surveys
– Student perception of learning surveys
– Alumni surveys
– Employer surveys
– Advisory perception survey
– General faculty survey
– Student evaluation of rotation experience
– Student evaluation of faculty
Interviews
Interviews are conducted with individual students; they may be structured with open- or closed-ended questions, or completely open-ended without predetermined questions.
– Exit interviews
– Consultation with internship supervisors
– Consultation with advisory board/council
Focus groups
Structured discussions are conducted with groups of students, who are asked a series of open-ended questions designed to collect data about beliefs, attitudes, and experiences.
Institutional data
Program and student data are collected at the institutional level.
– Graduation rates
– Time to degree
– Retention rates
– Persistence/return rates
– Job placement rates