One cornerstone of effective educational assessment today is gathering evidence of student learning. Evidence, of course, is not an unfamiliar construct to researchers. And, at first impression, it may seem unnecessary within the context of a research university to discuss what constitutes “good” evidence. It is important, however, that we develop a shared institutional understanding of how accrediting agencies, including the Western Association of Schools and Colleges (WASC), define evidence. At the program level, faculty must also decide how best to collect the necessary evidence of student learning.
How does WASC define “evidence” of student learning?
According to WASC, evidence should:
- cover core knowledge and skills that are developed through the program’s curriculum.
- involve multiple judgments of student performance.
- provide information on multiple dimensions of student performance.
Good evidence is also relevant, verifiable, representative or typical, cumulative, actionable, and reflectively analyzed.
What is the difference between “direct” and “indirect” evidence?
Traditional approaches to educational assessment have relied disproportionately on indirect evidence pertaining to students’ self-perceptions of their learning and their perspectives on program structure and curricular content. Examples include survey responses and results of focus groups or interviews. While indirect evidence can provide very useful information to faculty, it is simply not designed to answer fundamental questions about the degree to which students have met specific learning outcomes.
In accordance with changing federal expectations, effective assessment plans today must therefore also involve collecting direct evidence of student learning. Direct measures are those derived through the faculty’s systematic analysis of student projects, exams, or sets of specified course assignments. As such, they can make a compelling case for the extent to which students have achieved expected learning outcomes. Today, the most powerful components of educational effectiveness within undergraduate teaching and learning are: (a) thoughtfully constructed direct and indirect measures of student learning that are (b) assessed by program faculty as a collective body of evidence pertaining to educational effectiveness and considered for purposes of curricular review and development.
Shouldn’t faculty-assigned grades suffice as the primary indicator of learning?
In recent years, various higher education stakeholders have questioned the usefulness of passing grades as an indicator of the amount and quality of student learning. They point to the national phenomenon of grade inflation, the potentially great variability between instructors in how grades are assigned, and the belief that grades are too global an indicator to provide the detailed feedback required for individual- or program-level improvement. Assigning grades in individual courses remains important, but grades are no longer endorsed by accrediting agencies as sufficient independent evidence of learning quality. The availability of other, direct types of evidence is critical.
How do we gather “direct” evidence of student learning?
There are many approaches to gathering direct evidence of student learning. The utility and feasibility of any particular approach vary depending on program structure, size, philosophy, and related factors. At UCLA, we have identified three main pathways by which we believe that academic programs can most effectively, and efficiently, meet current federal and associated accreditation expectations for engaging in outcomes-based assessment that provides direct evidence of undergraduate student learning:
- Assessing final products from capstone experiences. UCLA has recently implemented a process for certifying “Capstone Majors” (all students completing the major have a required capstone experience) and “Capstone Programs” (at least 60% of students in the major complete a capstone). For programs that offer capstones, learning outcomes that are specifically tailored to that culminating academic experience necessarily reflect valued program goals. Departmental evaluation of samples of students’ capstone projects, papers, performances, or other products subsequently provides direct evidence of student learning.
- Creating program portfolios based on course-embedded assessment. Traditionally, portfolios have been conceived as student-compiled collections of their work. However, rather than ask students to prepare individual portfolios, faculty can create “program portfolios” composed of samplings of students’ work related to specific learning outcomes. Relevant student material (e.g., assignments, exam questions, entire tests, in-class activities, fieldwork activities, and/or homework assignments) from selected courses can be identified, a sampling scheme can be decided upon, and appropriate items can be collected and evaluated.
- Administering standardized tests, licensure exams, or program-developed senior exit exams. The Educational Testing Service and other companies offer standardized tests for various types of learning outcomes such as critical thinking or mathematical problem solving. Scores on tests such as the GRE or various licensure exams also can be used as direct evidence of student learning. Program faculty might also decide to develop a test for majors that is reflective of the program’s mission and learning outcomes.
Please keep in mind that none of these approaches is inherently “better” than the others. Decisions about which to use should be made by program faculty based on feasibility and manageability. Remember, too, that while direct evidence is essential, supporting evidence that is indirect in nature (e.g., that provided by student responses on the UCLA Senior Survey, other departmental surveys of student perceptions, exit interviews, and alumni or employer surveys) can also provide valuable indicators of educational effectiveness.