Assessment plan: The document that demonstrates how the program will assess the upcoming year’s performance. This document includes: Mission Statement, Goals, Learning outcomes, Curriculum, Criteria, & Methods.
Assessment report: The document that presents data and discusses how assessment results will be used to change curriculum and/or assessment procedure for the coming year. That is, the two key components of this report include the Results and Use of Results.
Assessment: The ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within our institutional system, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education (3). Also defined as the systematic collection, review, and use of information about educational programs, undertaken for the purpose of improving student learning and development (11, 12).
Benchmarking: A criterion-referenced objective. Performance data that are used for comparative purposes. A program can use its own data as a baseline benchmark against which to compare future performance. It can also use data from another program as a benchmark. In the latter case, the other program often is chosen because it is exemplary and its data are used as a target to strive for, rather than as a baseline. (7)
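The comparison logic behind benchmarking amounts to a few lines of arithmetic; the pass rates, baseline, and peer target below are invented purely for illustration.

```python
# Benchmarking sketch with hypothetical numbers: compare this cycle's
# pass rate on a program outcome against (a) the program's own prior-cycle
# baseline and (b) a target taken from an exemplary peer program.
baseline = 0.72      # program's own result last cycle (baseline benchmark)
peer_target = 0.85   # exemplary peer program's result (target benchmark)
current = 0.78       # this cycle's result

improved_over_baseline = current >= baseline
reached_peer_target = current >= peer_target
print(improved_over_baseline, reached_peer_target)  # True False
```

Note the distinction the definition draws: the program's own prior data serve as a floor to improve on, while the exemplary program's data serve as a target to strive toward.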
Bloom’s taxonomy: Six levels in which cognitively related objects can be categorized by levels of increasing complexity; the revised levels are Remember, Understand, Apply, Analyze, Evaluate, and Create (2).
Course-level assessment: Assessment to determine the extent to which a specific course is achieving its learning goals and outcomes, as well as assessment to improve the teaching of specific courses or segments of courses. (6) Compare with program-level assessment.
Criteria: Describe the relevant measures that will be used; state precisely what students will be doing; explain the conditions under which students will perform the task; and state an acceptable level of aggregate performance.
Curriculum: States where in the curriculum students will be exposed to the materials that will allow them to achieve the learning outcomes (e.g., specific courses, co-curricular opportunities).
Curriculum map: Curriculum maps demonstrate where in the program’s curriculum learning outcomes are being addressed. In essence, a curriculum map consists of a table with two axes, one pertaining to program learning outcomes, the other to courses in the major. “Mapping” program outcomes to course outcomes shows how students develop skills and knowledge in courses that are required for their programs of study.
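As a sketch, such a two-axis map can also be represented programmatically; the outcomes, course numbers, and I/R/M markings below are hypothetical examples, not a recommended design.

```python
# A minimal curriculum-map sketch: keys are hypothetical program learning
# outcomes; each row lists where the outcome is Introduced ("I"),
# Reinforced ("R"), or Mastered ("M") across hypothetical required courses.
courses = ["ENG 101", "ENG 250", "ENG 480"]

curriculum_map = {
    #                         ENG 101  ENG 250  ENG 480
    "Written communication": ["I",     "R",     "M"],
    "Critical analysis":     ["I",     "R",     "M"],
    "Research methods":      ["",      "I",     "R"],
}

def courses_addressing(outcome):
    """Return the courses in which a given learning outcome is addressed."""
    return [c for c, mark in zip(courses, curriculum_map[outcome]) if mark]

print(courses_addressing("Research methods"))  # ['ENG 250', 'ENG 480']
```

A map in this form makes gaps easy to spot: any outcome whose row contains no "M" is never brought to mastery anywhere in the required curriculum.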
Exit surveys and interviews: Information obtained from students on completion of their study. This typically includes information about student growth and change, satisfaction with academic programs, their experiences in their majors, and their immediate and future plans. (12)
Focus group: A carefully planned discussion to obtain perceptions on a defined area of interest in a permissive, nonthreatening environment. It is conducted with approximately 7 to 10 people by a skilled interviewer. (8)
Formative assessment: Formative assessment is conducted during the life of a program (or performance) with the purpose of providing feedback that can be used to modify, shape, and improve the program (or performance). (12) Compare with summative assessment.
Goals: Goal statements are broad, but provide a more detailed discussion of the general aims of the program that support the mission. Goal statements describe intended outcomes for students/graduates of the program in very general terms. The goal statements must list intended outcomes dictated by the mission statement.
Indirect measures: Assessment methods that involve perceptions of learning rather than actual demonstrations of learning outcome achievement. For example, a student survey about whether a course helped develop a greater sensitivity to diversity or an employer survey asking for feedback on graduates’ skills (9). Compare with direct measures.
Institutional effectiveness: According to SACS,
- 3.3 Institutional Effectiveness
- 3.3.1 The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in each of the following areas:
  - 3.3.1.1 educational programs, to include student learning outcomes
  - 3.3.1.2 administrative support services
  - 3.3.1.3 educational support services
  - 3.3.1.4 research within its educational mission, if appropriate
  - 3.3.1.5 community/public service within its educational mission, if appropriate (13)
Learning outcomes: A statement that describes the measurable skills, knowledge, and attitudes that students should be able to demonstrate as a result of the course or program. Learning outcomes should be SMART: Specific, Measurable, Agreed Upon, Realistic, and Time-Framed.
Mission statement: The mission statement is usually a short, one-paragraph general explanation of what the program is and why it exists. The mission statement should not be a mission discussion; keep it short and very general, and avoid jargon and phrases not in general use. At the program level, support from the university administration is easier to gain if the program mission statement supports the college or school mission statement.
Norm-referenced: A norm-referenced test is one designed to highlight achievement differences between and among students to produce a dependable rank order of students across a continuum of achievement from high achievers to low achievers. (4)
Portfolio: Collections of multiple student work samples usually compiled over time and rated using rubrics. The design of the portfolio is dependent upon how the scoring results are going to be used. (4)
Qualitative assessment: Assessments that rely on description rather than numerical scores or ratings. The emphasis is on the measurement of opinions, reflections and/or judgments. Examples include interviews, focus groups, and observations. (9) Compare with quantitative assessment.
Quantitative assessment: Assessments that rely on numerical scores or ratings. The emphasis is on the use of statistics, cumulative numbers, aggregated data, and numerical measurements. (9) Compare with qualitative assessment.
Results: Presents the findings from the data that have been collected and analyzed in a simple, easily understood format; accurately depicts the findings relevant to each learning outcome at the program level.
Rubric: A set of categories that define and describe the important components of the work being completed, critiqued, and assessed. Each category contains a gradation of levels of completion or competence, with a score assigned to each level and a clear description of the criteria that must be met to attain the score at each level. (4)
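To make that structure concrete, here is a minimal sketch of a rubric as a data structure; the categories, level descriptors, and scores are invented examples, not a recommended rubric.

```python
# A rubric sketch: each category maps leveled descriptors to a score.
# Categories, descriptors, and the 1-3 scale here are hypothetical.
rubric = {
    "Thesis": {
        3: "Clear, arguable thesis that frames the whole essay",
        2: "Identifiable thesis, but vague or only partly developed",
        1: "No discernible thesis",
    },
    "Evidence": {
        3: "Relevant, well-integrated sources support every claim",
        2: "Some claims supported; sources loosely integrated",
        1: "Claims largely unsupported",
    },
}

MAX_LEVEL = 3

def score_work(ratings):
    """Sum the level scores a rater assigned for each rubric category;
    return the raw total and the proportion of the maximum possible."""
    total = sum(ratings[category] for category in rubric)
    return total, total / (MAX_LEVEL * len(rubric))

total, proportion = score_work({"Thesis": 3, "Evidence": 2})
print(total)  # 5 out of a possible 6
```

Attaching a short descriptor to every score, as above, is what makes rubric ratings comparable across raters and across assessment cycles.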
Summative assessment: An assessment that is done at the conclusion of a course or some larger instructional period (e.g., at the end of the program). The purpose is to determine success or the extent to which the program/project/course met its goals and learning outcomes. (10) Compare with formative assessment.
Use of Results: Explains how specific results from assessment activities will be used for decision-making, strategic planning, program evaluation and improvement; assists in documenting changes and the reasons for the changes.
Value-added (growth or pre-post): Compares students' scores at entry to the program with their scores at the end of the program or course of study (4); student learning is demonstrated by determining how much students have gained through participation in the program (1).
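The growth calculation itself is simple arithmetic over matched pre/post scores; the student IDs and scores below are fabricated for illustration.

```python
# Value-added (pre-post) sketch: subtract each student's hypothetical
# entry score from the same student's exit score to estimate growth.
pre  = {"s1": 55, "s2": 62, "s3": 70}   # scores at program entry
post = {"s1": 78, "s2": 75, "s3": 88}   # scores at program completion

gains = {student: post[student] - pre[student] for student in pre}
mean_gain = sum(gains.values()) / len(gains)

print(gains)      # {'s1': 23, 's2': 13, 's3': 18}
print(mean_gain)  # 18.0
```

Matching scores student-by-student, rather than comparing group averages from different cohorts, is what distinguishes a true pre-post design.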
- Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker.
- Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives (Complete ed.). New York: Longman.
- Angelo, T. A. (1995). Reassessing (and defining) assessment. American Association for Higher Education (AAHE) Bulletin, 48(2), 7-9.
- Assessment Services of Northern Illinois University (2008). Assessment terms glossary. Retrieved February 25, 2008, from http://www.niu.edu/assessment/resources/terms.shtml
- Bordens, K.S., & Abbott, B.B. (1997). Research design and methods: A process approach (4th ed.). Mountain View, CA: Mayfield.
- Burns, M.; Fager, J.; Gumm, A.; Haley, A.; Krider, D.; Linrud, J.; Osborn, W.; Riebschleger, J.; Shahabuddin, S.; & Webster, D. (2008). Central Michigan University Assessment Toolkit. Retrieved February 25, 2008, from http://academicaffairs.cmich.edu/caa/assessment/resources/toolkit
- Hatry, H., van Houten, T., Plantz, M., & Greenway, M.T. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: United Way of America.
- Krueger, R.A. (1994). Focus groups: A practical guide for applied research (2nd ed.). Thousand Oaks, CA: Sage.
- Oxnard College (2006). Oxnard College SLO Glossary of Terms. Retrieved February 26, 2008, from http://www.oxnardcollege.edu/faculty/slo/SLO%20Glossary%20of%20Terms.pdf
- Ozarka College. Retrieved March 5, 2008, from http://www.ozarka.edu/assessment/glossary.cfm
- Marchese, T. J. (1987). Third down, ten years to go. AAHE Bulletin, 40, 3-8.
- Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
- Southern Association of Colleges and Schools. (2008). Principles of accreditation: Foundations for quality enhancement. Retrieved March 3, 2008, from http://www.sacscoc.org/pdf/2008%20Interim%20Principles%200108.pdf