Western University of Health Sciences
Institutional Research and Effectiveness - IRE


Glossary of Assessment Terms *


Accreditation Certification that programs or institutions have appropriate infrastructures, policies, and services to support their operations and that they are accomplishing their mission.
Anonymity Data elements cannot be associated with individual respondents.
Assessment The collection and use of evidence to improve a product or process.
Authentic assessment The assessment process is similar to or embedded in relevant real-world activities.
Benchmark A criterion for assessing results compared to an empirically developed standard.
Bloom’s taxonomy A popular scheme for defining depth of processing.
Classroom assessment Assessment to improve the teaching of specific courses and segments of courses.
Close the loop Faculty discuss assessment results, reach conclusions about their meaning, determine implications for change, and implement them.
Confidentiality The person who conducts the assessment study is aware of who participated, but does not disclose this information.
Construct validity A form of validity based on testing predictions made using theory or construct underlying the procedure.
Content analysis Summarizing a set of communications by analyzing common themes and highlighting important issues.
Criterion-related validity How well results predict a phenomenon of interest.
Data ownership Who has control over the assessment data – who has the right to see the data or allow others to see them?
Deep learning Learning which makes knowledge personal and relevant to real-world applications.
Demographic characteristics Individual characteristics such as age and sex.
Developmental assessment Repeated assessment information on individual students is used to track, verify, and support student development.
Developmental portfolio A portfolio designed to show student progress by comparing products from early and late stages of the student’s academic career.
Direct measure Students demonstrate that they have achieved a learning objective.
Educational effectiveness How well a program or institution promotes student development.
Embedded assessment Assessment activities occur in courses. Students generally are graded on this work, and some or all of it also is used to assess program learning objectives.
Face validity A form of validity determined by subjective evaluation by test takers or by experts in what is being assessed.
Focus groups Planned discussion among groups of participants who are asked a series of carefully constructed questions about their beliefs, attitudes, and experiences.
Formative assessment Assessment designed to give feedback to improve what is being assessed.
Formative validity How well an assessment procedure provides information that is useful for improving what is being assessed.
Generalizable results Results that accurately represent the population that was sampled.
Goals General statements about knowledge, skills, attitudes, and values of expected graduates.
Halo effect A problem that occurs when judgments are influenced by each other.
Holistic rubric A rubric that involves one global, holistic judgment.
Indirect measure Students (or others) report perceptions of how well students have achieved an objective.
Inter-rater reliability How well two or more raters agree when decisions are based on subjective judgments.
Learning objective A clear, concise statement that describes how students can demonstrate their mastery of a program goal.
Likert scale A survey format that asks respondents to indicate their degree of agreement. Response options generally range from “strongly disagree” to “strongly agree.”
Mission A holistic vision of the values and philosophy of a program, department, or institution.
Norms/norm group Results that are used to interpret the relative performance of others; for example, test results might be compared to norms based on samples of college freshmen or college graduates.
Objectivity Faculty have an unbiased attitude throughout the assessment process, including gathering evidence, interpreting evidence, and reporting the results.
Open-ended question A question which invites respondents to generate long replies, rather than just a word or two.
Percentage of agreement An indicator of inter-rater reliability.
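As an illustrative sketch of how percentage of agreement is computed (all ratings below are invented example data, not from the source), two raters independently score the same set of student products, and agreement is the share of products on which their scores match:

```python
# Percentage of agreement between two raters who independently
# scored the same ten student essays on a rubric.
# The ratings are made-up example data.
rater_a = [3, 2, 4, 3, 1, 4, 2, 3, 4, 2]
rater_b = [3, 2, 4, 2, 1, 4, 2, 3, 3, 2]

# Count cases where both raters assigned the same score,
# then divide by the number of rated products.
matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
percent_agreement = 100 * matches / len(rater_a)

print(f"Percentage of agreement: {percent_agreement:.0f}%")  # 80%
```

Exact-match agreement is the simplest such indicator; programs sometimes also count scores within one rubric level of each other as agreement.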
Performance measure Students exhibit how well they have achieved an objective by performing a relevant task, such as giving a piano recital.
Pilot study An abbreviated study to test procedures before the full study is implemented.
Portfolio Compilation of student work. Students often are required to reflect on their achievement of learning objectives and how the presented evidence supports their conclusions.
Program assessment An ongoing process designed to monitor and improve student learning. Faculty develop explicit statements of what students should learn, verify that the program is designed to foster this learning, collect empirical data that indicate student attainment, and use these data to improve student learning.
Purposeful sample A sample created using predetermined criteria, such as proportional representation of students at each class level.
Qualitative assessment Assessment findings are verbal descriptions of what was discovered, rather than numerical scores.
Quantitative assessment Assessment findings are summarized with a number that indicates the extent of learning.
Recall item A test item that requires students to generate the answer on their own, rather than to identify the answer in a provided list.
Recognition item A test item that requires students to identify the answer in a provided list.
Reflective essays Respondents are asked to write essays on personal perspectives and experiences.
Reliability The degree of measurement precision and stability for a test or assessment procedure.
Representative sample An unbiased sample that adequately represents the population from which the sample is drawn.
Response rate The proportion of contacted individuals who respond to a request.
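As a minimal sketch of the response-rate calculation (the counts below are invented example figures), the rate is simply the number of respondents divided by the number of individuals contacted:

```python
# Response rate for a survey; the counts are made-up example data.
contacted = 250   # individuals invited to take the survey
responded = 85    # individuals who completed it

response_rate = responded / contacted
print(f"Response rate: {response_rate:.0%}")  # 34%
```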
Rubric An explicit scheme for classifying products or behaviors into categories that are steps along a continuum.
Standardized test A test which is administered to all test takers under identical conditions.
Structured interview Interviewers ask the same questions of each person being interviewed.
Summative assessment Assessment designed to provide an evaluative summary of what has been achieved.
Summative validity How accurately an assessment procedure evaluates what is being assessed.
Surface learning Learning based on memorization of facts without deep understanding of what is learned.
Survey A questionnaire that collects information about beliefs, experiences, or attitudes.
Traditional measure Students exhibit how well they have achieved an objective by taking traditional tests, such as multiple-choice tests.
Triangulation Multiple lines of evidence lead to the same conclusion.
Unstructured interview Interviewers are allowed to vary their questions across interviewees.
Validity How well a procedure assesses what it is supposed to be assessing.
Value-added assessment Student learning is demonstrated by determining how much students have gained through participation in the program.

* Note: The definitions listed above were taken from Mary J. Allen’s (2004) book Assessing Academic Programs in Higher Education.