Glossary of Assessment Terms *

Accreditation: Certification that programs or institutions have appropriate infrastructures, policies, and services to support their operations and that they are accomplishing their mission.
Anonymity: Data elements cannot be associated with individual respondents.
Assessment: The collection and use of evidence to improve a product or process.
Authentic assessment: The assessment process is similar to or embedded in relevant real-world activities.
Benchmark: A criterion for assessing results compared to an empirically developed standard.
Bloom’s taxonomy: A popular scheme for defining depth of processing.
Classroom assessment: Assessment to improve the teaching of specific courses and segments of courses.
Close the loop: Faculty discuss assessment results, reach conclusions about their meaning, determine implications for change, and implement them.
Confidentiality: The person who conducts the assessment study is aware of who participated, but does not disclose this information.
Construct validity: A form of validity based on testing predictions made using the theory or construct underlying the procedure.
Content analysis: Summarizing a set of communications by analyzing common themes and highlighting important issues.
Criterion-related validity: How well results predict a phenomenon of interest.
Data ownership: Who has control over the assessment data – who has the right to see the data or allow others to see them?
Deep learning: Learning which makes knowledge personal and relevant to real-world applications.
Demographic characteristics: Individual characteristics such as age and sex.
Developmental assessment: Repeated assessment information on individual students is used to track, verify, and support student development.
Developmental portfolio: A portfolio designed to show student progress by comparing products from early and late stages of the student’s academic career.
Direct measure: Students demonstrate that they have achieved a learning objective.
Educational effectiveness: How well a program or institution promotes student development.
Embedded assessment: Assessment activities occur in courses. Students generally are graded on this work, and some or all of it also is used to assess program learning objectives.
Face validity: A form of validity determined by subjective evaluation by test takers or by experts in what is being assessed.
Focus groups: Planned discussion among groups of participants who are asked a series of carefully constructed questions about their beliefs, attitudes, and experiences.
Formative assessment: Assessment designed to give feedback to improve what is being assessed.
Formative validity: How well an assessment procedure provides information that is useful for improving what is being assessed.
Generalizable results: Results that accurately represent the population that was sampled.
Goals: General statements about the knowledge, skills, attitudes, and values expected of graduates.
Halo effect: A problem that occurs when judgments are influenced by each other.
Holistic rubric: A rubric that involves one global, holistic judgment.
Indirect measure: Students (or others) report perceptions of how well students have achieved an objective.
Inter-rater reliability: How well two or more raters agree when decisions are based on subjective judgments.
Learning objective: A clear, concise statement that describes how students can demonstrate their mastery of a program goal.
Likert scale: A survey format that asks respondents to indicate their degree of agreement. Responses generally range from “strongly disagree” to “strongly agree.”
Mission: A holistic vision of the values and philosophy of a program, department, or institution.
Norms/norm group: Results that are used to interpret the relative performance of others; for example, test results might be compared to norms based on samples of college freshmen or college graduates.
Objectivity: Faculty have an unbiased attitude throughout the assessment process, including gathering evidence, interpreting evidence, and reporting the results.
Open-ended question: A question which invites respondents to generate long replies, rather than just a word or two.
Percentage of agreement: An indicator of inter-rater reliability, calculated as the percentage of cases in which raters assign the same rating or category.
Performance measure: Students exhibit how well they have achieved an objective by doing it, such as a piano recital.
Pilot study: An abbreviated study to test procedures before the full study is implemented.
Portfolio: Compilation of student work. Students often are required to reflect on their achievement of learning objectives and how the presented evidence supports their conclusions.
Program assessment: An ongoing process designed to monitor and improve student learning. Faculty develop explicit statements of what students should learn, verify that the program is designed to foster this learning, collect empirical data that indicate student attainment, and use these data to improve student learning.
Purposeful sample: A sample created using predetermined criteria, such as proportional representation of students at each class level.
Qualitative assessment: Assessment findings are verbal descriptions of what was discovered, rather than numerical scores.
Quantitative assessment: Assessment findings are summarized with a number that indicates the extent of learning.
Recall item: A test item that requires students to generate the answer on their own, rather than to identify the answer in a provided list.
Recognition item: A test item that requires students to identify the answer in a provided list.
Reflective essays: Respondents are asked to write essays on personal perspectives and experiences.
Reliability: The degree of measurement precision and stability for a test or assessment procedure.
Representative sample: An unbiased sample that adequately represents the population from which the sample is drawn.
Response rate: The proportion of contacted individuals who respond to a request.
Rubric: An explicit scheme for classifying products or behaviors into categories that are steps along a continuum.
Standardized test: A test which is administered to all test takers under identical conditions.
Structured interview: Interviewers ask the same questions of each person being interviewed.
Summative assessment: Assessment designed to provide an evaluative summary.
Summative validity: How accurately an assessment procedure evaluates what is being assessed.
Surface learning: Learning based on memorization of facts without deep understanding of what is learned.
Survey: A questionnaire that collects information about beliefs, experiences, or attitudes.
Traditional measure: Students exhibit how well they have achieved an objective by taking traditional tests, such as multiple-choice tests.
Triangulation: Multiple lines of evidence lead to the same conclusion.
Unstructured interview: Interviewers are allowed to vary their questions across interviewees.
Validity: How well a procedure assesses what it is supposed to be assessing.
Value-added assessment: Student learning is demonstrated by determining how much students have gained through participation in the program.

* Note: The definitions listed above were taken from Mary J. Allen’s (2004) book Assessing Academic Programs in Higher Education.