Assessment Glossary

Definitions are taken verbatim from sources. Some terms have multiple definitions to reflect different accreditation organizations.

Academic Freedom: The ability to engage differences of opinion, evaluate evidence and form one’s own grounded judgments about the relative value of competing perspectives. The definition implies not just freedom from constraint but also freedom for faculty, staff and students to work within a scholarly community to develop intellectual and personal qualities. (HLC Proposed Criterion Revision).

Academic Offerings: Any educational experience offered at an institution for academic credit. This includes, but is not limited to, degree and certificate programs and courses (HLC Proposed Criterion Revision).

Accuracy: The extent to which an evaluation is truthful or valid in what it says about a program, project, or material (Westat, J. 2002).

Achievement: Performance as determined by some type of assessment or testing (Westat, J. 2002).

Affect: Consists of emotions, feelings and attitudes (Westat, J. 2002).

Anonymity (provision for): Evaluator action to ensure that the identity of subjects cannot be ascertained during the course of a study, in study reports, or in any other way (Westat, J. 2002).

Appropriate to Higher Education: Curricular and co-curricular programming of the quality and rigor for the degree level that prepares students to think critically and function successfully. It is distinctly different from K-12 education (HLC Proposed Criterion Revision).

Assessment: Is the ongoing process of:

  • establishing clear, measurable outcomes of student learning,
  • ensuring that students have sufficient opportunities to achieve those outcomes,
  • systematically gathering, analyzing and interpreting evidence to determine how well students have matched those expectations,
  • and using the resulting information to understand and improve student learning (Suskie, L., 2009, p.4).

Assessment and Evaluation: Are used as ordinary language synonyms. When a narrower referent is intended, the terms are modified, as in “assessment of student learning” or “evaluation of academic services” (HLC).

Assessment Cycle: Is the longitudinal process of assessment including design, pilot, revise, train, assess, analyze, intervene, and reassess (Hatfield, S., & Rogers, G., 2018).

Assessments: Offer a framework through which you can identify, collect and prepare data to evaluate the attainment of student outcomes and program educational objectives (Accreditation Board for Engineering and Technology, Inc. [ABET]).

Assurance of Learning: Refers to processes for demonstrating that students achieve learning expectations for the programs in which they participate (Association to Advance Collegiate Schools of Business [AACSB], p. 32-33).

Attitude: A person’s opinion about another person, thing, or state (Westat, J. 2002).

Attrition: Loss of subjects from the defined sample during the course of data collection (Westat, J. 2002).

Authentic Assessment: Alternative to traditional testing that focuses on student skill in carrying out real-world tasks (Westat, J. 2002).

Auxiliary Activities and Services: Related to, but not intrinsic to, educational functions: dining services, student housing, faculty or staff housing, intercollegiate athletics, student stores, a public radio station, etc. In many institutions, “auxiliary” simultaneously denotes a segregated budget and dedicated revenues (HLC Proposed Criterion Revision).

Behavioral Objectives: Measurable changes in behavior that are targeted by a project (Westat, J. 2002).

Bias: A point of view that inhibits objectivity (Westat, J. 2002).

Bloom’s Taxonomy of Learning Domains: Is a hierarchical ordering of learning objectives by complexity and specificity to promote higher levels of learning.

Case Study: An intensive, detailed description and analysis of a single project, program, or instructional material in the context of its environment (Westat, J. 2002).

Categorical Scale: A scale that distinguishes among individuals by putting them into a limited number of groups or categories (Westat, J. 2002). If the categories have numbers assigned to them, the number does not refer to a quantity or amount but to a type or kind of category.

Civic Engagement: Community service or any number of other efforts (by individuals or groups) intended to address issues of public or community concern (HLC Proposed Criterion Revision).

Classroom Grading: Course performance judged by the instructor of record for a course. It can provide an indirect measure of student learning. …However, uncorroborated judgment within a class does not typically meet the more strenuous requirements advocated by accrediting agencies (Pusateri, T., 2009).

Closing the Loop: Taking the evidence from assessment and determining what future action should be taken to improve student learning, or making appropriate changes in the curriculum based on assessment results (AACSB, p. 70).

Co-curricular: Learning activities, programs and experiences that reinforce the institution’s mission and values and complement the formal curriculum. Examples: Study abroad, student-faculty research experiences, service learning, professional clubs or organizations, athletics, honor societies, career services, etc. (see also entry for academic offerings) (HLC Proposed Criterion Revision).

Competence/Competency: Effective application of available knowledge, skills, attitudes and values (KSAVs) in complex situations. The knowledge, skills and other attributes (KSOs) that are essential for performing a specific task or job (Commission on the Accreditation of Health Management Education [CAHME]).

Competence/Competency Assessment: Measure of student attainment of the KSOs that is undertaken by a Program at the course and Program level using direct and indirect measures (CAHME).

Competency: General statement of student learning. Lacks context and is unmeasurable. General statements of skill areas in which students should be competent (Hatfield, S., & Rogers, G., 2018).  

Competency Levels: The target level of knowledge, skills and other attributes (KSOs) that align with the anticipated positions graduates will attain upon completion of the Program. Programs are expected to define the scale used to assess competency attainment, establish target levels of attainment for each competency, and measure students against the scale. CAHME does not require Programs to target expert levels of competency attainment unless this aligns with their mission (CAHME).

Criterion-referenced Test: Test whose scores are interpreted by referral to well-defined domains of content or behaviors, rather than by referral to the performance of some comparable group of people (Westat, J. 2002).

Cross-sectional Study: A cross-section is a random sample of a population, and a cross-sectional study examines this sample at one point in time. Successive cross-sectional studies can be used as a substitute for a longitudinal study. For example, examining today’s first year students and today’s graduating seniors may enable the evaluator to infer that the college experience has produced or can be expected to accompany the difference between them. The cross-sectional study substitutes today’s senior for a population that cannot be studied until 4 years later (Westat, J. 2002).

Direct Assessment: Federal regulations define a direct assessment competency-based educational Program as an instructional Program that, in lieu of credit hours or clock hours as a measure of student learning, uses direct assessment of student learning relying solely on the attainment of defined competencies, or recognizes the direct assessment of student learning by others (CAHME).

Direct Evidence: Is tangible, visible, self-explanatory, and compelling evidence of what students have learned and not learned. The product assessed has grading criteria with rigorous standards (e.g. student papers assessed with a detailed grading rubric) (Suskie, L., 2009, p.20).

Direct Measures: Are based on student performance of Program activities within courses or Program-sponsored experiential learning opportunities (CAHME).

Diversity: Valuing and benefiting from personal differences. These differences address many variables including race, religion, color, gender, national origin, disability, sexual orientation, age, education, geographic origin, and skill characteristics as well as differences in ideas, thinking, academic disciplines, and perspectives and must be in accordance with the applicable state/provincial and federal laws (CAHME).

Domain: A group of competencies that are related. A broad, distinguishable area of competence that provides a general descriptive framework. A specified sphere of activity or knowledge (CAHME).

Embedded Assessment: Departments demonstrate efficient planning when they embed assessment practices in existing coursework. The department agrees in which courses this data collection should occur and collectively designs the strategy and uses the data to provide feedback about student progress within the program (Pusateri, T., 2009).

Evaluation Processes: Interpret the data and evidence accumulated through the assessment process and determine the extent to which student outcomes and program educational objectives are being attained. Thoughtful evaluation of findings is essential to ensure that decisions and actions taken as a result of the assessment process will lead to program improvement (ABET).

Expected Outcomes: Broad or high-level statements describing impacts the school expects to achieve in the business and academic communities it serves as it pursues its mission through educational activities, scholarship, and other endeavors. Expected outcomes translate the mission into overarching goals against which the school evaluates its success (AACSB, p. 16). See chart below.

General Education: Is the operationalization of institutional outcomes OR the foundation for study in the major (Hatfield, S., & Rogers, G., 2018).

Goal: (often used interchangeably with objective and outcome) States what your college or program aims to achieve (e.g. students learn cultural competence). Broad concepts or categories of expected learning (Hatfield, S., & Rogers, G., 2018). See chart below.

Goals and Outcomes: Are used inconsistently by member institutions in the context of assessment of student learning, to the extent that one institution’s goal may be another’s outcome and vice versa. When they use either term, the Criteria indicate through context whether the term refers to the learning intended or to how much students actually learn (HLC). See chart below.

Good Practice: Practice that is based in the use of processes, methods and measures that have been determined to be successful by empirical research, professional organizations and/or institutional peers (HLC Proposed Criterion Revision).

Indirect Evidence: Proxy signs that students are learning. This type of evidence is less clear and convincing (e.g. course grades, student/alumni attitudes, student participation rates in research, career placement, etc.). Student reflections, in which students report, describe, or reflect on their learning, are also a form of indirect assessment (Hatfield, S., & Rogers, G., 2018).

Indirect Measures: Are based on perceptions of learning such as student self-assessments, focus groups, or surveys (CAHME).

Informed Citizenship: Having sufficient and reliable information about issues of public concern and having the knowledge and skills to make reasonable judgments and decisions about them (HLC Proposed Criterion Revision).

Integrative Experiences: The combining of a variety of learnings from the Program curriculum into a single coursework environment, such as an experiential field experience (for example, an administrative residency or administrative internship) or a capstone course, which makes course content relevant to career advancement: the collection of skills, knowledge and abilities developed over the didactic curriculum (CAHME).

Inter-rater Reliability: A measure of the extent to which different raters score an event or response in the same way (Westat, J. 2002).

Intervention: Project feature or innovation subject to evaluation (Westat, J. 2002).

Learning Goals: State the educational expectations for each degree program. They specify the intellectual and behavioral competencies a program is intended to instill. In defining these goals, the faculty members clarify how they intend for graduates to be competent and effective as a result of completing the program (AACSB, p. 32). See chart below.

Learning Objectives: Brief, clear, specific statements of what students will be able to perform at the conclusion of instructional activities (CAHME). See chart below.

Learning Outcome: Statement identifying what students will be able to do as the result of study in the program. Format for learning outcomes: Students should be able to [action verb] [something] (See objective and goal; Hatfield, S., & Rogers, G., 2018).

Longitudinal Study: An investigation or study in which a… group of individuals is followed over a substantial period of time to discover changes that may be attributable to the influence of the treatment, or to maturation, or the environment (Westat, J. 2002). A study designed to follow subjects forward through time (CAHME).

Norm-referenced Tests: Tests that measure the relative performance of the individual or group by comparison with the performance of the individuals or groups taking the same test (Westat, J. 2002).

Objective: (often used interchangeably with goal and outcome) More detailed aspects of goals. A goal might be for students to have the ability to explain concepts in writing; more detailed objectives might be for students to have the ability to write essays and critique the writing of their peers. See chart below.

Performance Indicators: Represent the knowledge, skills, attitudes or behavior students should be able to demonstrate by the time of graduation that indicate competence related to the outcome (ABET). Specific, measurable statements identifying what students will be able to do as the result of study in the program. Well-stated performance indicators (1) provide faculty with clear direction for classroom implementation and (2) make expectations explicit to students (Hatfield, S., & Rogers, G., 2018).

Pre-post Comparisons: Departments may measure knowledge and abilities on the front end of a program to establish a baseline for their students. They may re-administer the same instrument at the conclusion of the program to determine “value added” by the student’s educational experiences (APA).

Program Educational Objectives: Are based on the needs of the program’s constituencies and are expressed in broad statements that describe what graduates are expected to attain within a few years of graduation (ABET).

Program Review: Comprehensive evaluation of an academic program that is designed to both foster improvement and demonstrate accountability. Assessment results can be incorporated into program review, but program review is broader than assessment.

Purposive Sampling: Creating samples by selecting information-rich cases from which one can learn a great deal about issues of central importance to the purpose of the evaluation (Westat, J. 2002).

Qualitative Evaluation: The approach to evaluation that is primarily descriptive and interpretative (Westat, J. 2002).

Quantitative Evaluation: The approach to evaluation involving the use of numerical measurement and data analysis based on statistical methods (Westat, J. 2002).

Random Sampling: Drawing a number of items of any sort from a larger group or population so that every individual item has a specified probability of being chosen (Westat, J. 2002).

Representative Sample: A subset of assessment products (e.g. papers, test scores) selected using random sampling procedures so that every product has an equal chance of being selected. Random sampling should result in assessment results accurately representing the population from which products were drawn. 

Rubric: A detailed guide for scoring an assessment product (e.g. paper, presentation, writing assignment). An analytic rubric describes different levels of performance. Each performance indicator has performance descriptions that describe varying performance levels (Hatfield, S., & Rogers, G., 2018).

Standard: When issued by regulatory, accreditation, continuing education, or other professional education and training certification organizations, each of which has enforcement authority, the term standard is typically used to refer to a minimal (threshold) requirement (APA).

Standardized Tests: Tests that have standardized instructions for administration, use, scoring, and interpretation with standard printed forms and content. They are usually norm-referenced tests but can also be criterion-referenced (Westat, J. 2002).

Student Learning Outcomes: Statements that clearly state the expected knowledge, skills, attitudes, competencies and habits of mind that students are expected to acquire at an institution of higher education (National Institute for Learning Outcomes Assessment [NILOA]).

Student Outcomes: Relate to the knowledge, skills and behaviors that students acquire as they progress through the program and describe what students are expected to know and be able to do by the time of graduation. Defining educational objectives and student outcomes provides faculty with a common understanding of the expectations for student learning and supports consistency across the curriculum, as measured by performance indicators (ABET). Education-specific results to measure against the objectives or standards for the educational offerings. Examples could be results from licensure or standardized exams, course and program persistence, graduation rates and workforce data (HLC Proposed Criterion Revision). See chart below.

Threshold, Target, Standards, Benchmarks, or Competency Benchmarks: Are specific targets against which we gauge success in achieving an outcome (e.g. 95% of students will be rated as “acceptable” on the research paper rubric). 

Triangulation: In an evaluation, an attempt to get corroboration on a phenomenon or measurement by approaching it by several (three or more) independent routes. This effort provides confirmatory measurement (Westat, J. 2002).

Validity: The soundness of the inferences made from the results of a data-gathering process (Westat, J. 2002).

 

Accreditor Comparison: Goals, Objectives and Outcomes

AACSB
  Goals: Learning goals state the broad, conceptual educational expectations for each degree program. They specify the intellectual and behavioral competencies a program is intended to instill. In defining these goals, the faculty members clarify how they intend for graduates to be competent and effective as a result of completing the program.
  Objectives: Objectives are the specific measurable definitions of learning outcomes.
  Outcomes: Expected outcomes are broad or high-level statements describing impacts the school expects to achieve in the business and academic communities it serves as it pursues its mission through educational activities, scholarship, and other endeavors. Expected outcomes translate the mission into overarching goals against which the school evaluates its success.

ABA
  Outcomes: Standard 302 provides that law schools identify desired learning outcomes.

ABET
  Objectives: Educational objectives are based on the needs of the program’s constituencies and are expressed in broad statements that describe what graduates are expected to attain within a few years of graduation.
  Outcomes: Student outcomes relate to the knowledge, skills and behaviors that students acquire as they progress through the program and describe what students are expected to know and be able to do by the time of graduation.

CAHME
  Objectives: Learning objectives are brief, clear, specific statements of what students will be able to perform at the conclusion of instructional activities.

CSWE
  Outcomes: Outcome is “the percentage of students who met the criterion set by the program.” Defining educational objectives and student outcomes provides faculty with a common understanding of the expectations for student learning and supports consistency across the curriculum, as measured by performance indicators.

HLC
  Goals: Broad concepts or categories of expected learning (Hatfield, S., & Rogers, G., 2018).
  Outcomes: Statement identifying what students will be able to do as the result of study in the program. Format: Students should be able to [action verb] [something] (Hatfield, S., & Rogers, G., 2018).
  Student Outcomes: Education-specific results to measure against the objectives or standards for the educational offerings. Examples could be results from licensure or standardized exams, course and program persistence, graduation rates and workforce data (HLC Proposed Criterion Revision).
  Note: Goals and outcomes are used inconsistently by member institutions in the context of assessment of student learning, to the extent that one institution’s goal may be another’s outcome and vice versa. When they use either term, the Criteria indicate through context whether the term refers to the learning intended or to how much students actually learn (HLC).

References

Accreditation Board for Engineering and Technology, Inc. (n.d.). Assessment Planning. Retrieved from http://www.abet.org/accreditation/get-accredited/assessment-planning/.

The Association to Advance Collegiate Schools of Business (AACSB). (2018, July 1). 2013 Eligibility Procedures and Accreditation Standards for Business Accreditation. Retrieved from https://www.aacsb.edu/-/media/aacsb/docs/accreditation/business/standards-and-tables/2018-business-standards.ashx?la=en.

Commission on the Accreditation of Health Management Education. (2018, May). Self-Study Handbook for Graduate Programs in Healthcare Management Education. Retrieved from https://cahme.org/files/resources/CAHME_Self_Study_Handbook_Fall2017_RevisedMay2018.pdf.

Council on Social Work Education. (2013). Exploring EPAS 2008 Standard 4: Assessment. The COA Assessment Committee. Retrieved from https://www.cswe.org/CMSPages/GetFile.aspx?guid=5e4cfd55-ca9e-490f-ab81-4bacbc9c7740.

The Higher Learning Commission. (n.d.). Criteria for Accreditation Terminology. Retrieved from https://www.hlcommission.org/Policies/glossary-new-criteria-for-accreditation.html.

Hatfield, S., & Rogers, G. (2018). Higher Learning Commission, Assessing General Education Workshop Material.

Higher Learning Commission. (2018, November). Draft Criteria for Accreditation (Beta Revision). Retrieved from http://download.hlcommission.org/ProposedCriteriaRevision_2018-11_POL.pdf.

National Institute for Learning Outcomes Assessment (NILOA). (n.d.). Providing Evidence of Student Learning: A Transparency Framework. Retrieved from http://www.learningoutcomeassessment.org/TFComponentSLOS.html.

Pusateri, T. (2009). The Assessment CyberGuide for Learning Goals and Outcomes. APA Board of Educational Affairs (BEA), American Psychological Association (APA). Retrieved from https://www.apa.org/ed/governance/bea/assessment-cyberguide-v2.pdf.

Suskie, Linda. (2009). Assessing student learning: A common sense guide. San Francisco: Jossey-Bass.

Westat, J. (2002). The 2002 User-Friendly Handbook for Project Evaluation. Retrieved from https://www.nsf.gov/pubs/2002/nsf02057/nsf02057.pdf.