For departments with Capstone Majors or Capstone Programs (see UCLA’s Capstone Initiative), the assessment of student learning outcomes should center on the final capstone product (e.g., a performance, project, or paper). Once the defining characteristics and levels of achievement for each learning outcome featured in a particular assessment cycle have been operationalized, program faculty are ready to evaluate capstone products for evidence of student learning in each targeted outcome area. Typically, a faculty curriculum or assessment subcommittee is responsible for this evaluation.
Sampling decisions are left to the discretion of the faculty. Within a given program, faculty may decide to review all capstone products from a particular student cohort. Alternatively, they may review a random sample of students’ work within or across cohorts; take a systematic sample (e.g., every 5th student in a particular cohort); or draw a purposeful sample of student work based on pre-determined criteria (e.g., the lowest, middle, and highest 10% of performers).
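For programs that keep an electronic roster of capstone students, these three sampling approaches can be mocked up in a few lines. The Python sketch below is purely illustrative, assuming a hypothetical roster of 100 records with placeholder `student_id` and `score` fields; the sample sizes and criteria shown are examples, and actual sampling decisions remain with program faculty.

```python
import random

# Hypothetical cohort roster: each record pairs a student ID with an overall
# score (the score is used only for the purposeful sample below).
cohort = [{"student_id": i, "score": random.uniform(60, 100)} for i in range(1, 101)]

# Random sample: a fixed number of capstone products drawn at random.
random_sample = random.sample(cohort, k=20)

# Systematic sample: every 5th student in the cohort roster.
systematic_sample = cohort[::5]

# Purposeful sample: lowest, middle, and highest 10% of performers,
# based on a pre-determined criterion (here, the overall score).
ranked = sorted(cohort, key=lambda record: record["score"])
decile = max(1, len(ranked) // 10)
mid_start = (len(ranked) - decile) // 2
purposeful_sample = (
    ranked[:decile]                         # lowest 10%
    + ranked[mid_start:mid_start + decile]  # middle 10%
    + ranked[-decile:]                      # highest 10%
)

print(len(random_sample), len(systematic_sample), len(purposeful_sample))
```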
Operationally defining the characteristics of each learning outcome is a two-step process:
Step 1: For each learning outcome that will be featured within a particular assessment cycle, clearly define each characteristic to be assessed. This enables the faculty responsible for the evaluation to work from a common frame of reference when reviewing student work. Take, for example, the learning outcome “Students completing the major will demonstrate effective written communication skills.” Effective writing could be illustrated by the following five characteristics:
Step 2: Describe the different levels of achievement for each characteristic of the learning outcome(s) to be assessed during a particular assessment cycle. For example, what do faculty agree constitutes “excellent,” “good,” “fair,” or “poor” performance on each of the five characteristics of writing noted in Step 1? Excellent performance in “development,” for instance, might be defined by logical and cohesive organization of the argument, seamless development of the argument, the absence of significantly extraneous elements, and the inclusion of evidence that contributes to the argument’s persuasiveness.
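One way to think about the result of Steps 1 and 2 is as a small rubric table keyed by characteristic and achievement level. The sketch below is a hypothetical illustration only: the “development” descriptor paraphrases the example above and the level names come from Step 2, but the placeholder descriptors, the numeric point scale, and the `score_product` helper are assumptions, not part of any prescribed process.

```python
# Achievement levels from Step 2.
LEVELS = ["excellent", "good", "fair", "poor"]

# Rubric: one entry per characteristic defined in Step 1, with a faculty-agreed
# descriptor for each achievement level. Only "development" is sketched here.
rubric = {
    "development": {
        "excellent": ("Logical and cohesive organization of the argument; seamless "
                      "development; no significantly extraneous elements; evidence "
                      "contributes to the argument's persuasiveness."),
        "good": "Descriptor agreed upon by program faculty.",
        "fair": "Descriptor agreed upon by program faculty.",
        "poor": "Descriptor agreed upon by program faculty.",
    },
    # ...remaining characteristics from Step 1 would follow the same pattern.
}

def score_product(ratings):
    """Map each characteristic's rated level to points (excellent=3 ... poor=0)."""
    points = {level: len(LEVELS) - 1 - i for i, level in enumerate(LEVELS)}
    return {characteristic: points[level] for characteristic, level in ratings.items()}

print(score_product({"development": "good"}))  # -> {'development': 2}
```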
Upon completing their review of student work, program faculty are advised to:
- Reflect on how assessment findings may inform pedagogical practice and/or curricular planning. An important part of this process involves engaging faculty colleagues and, as applicable, students and/or other educational partners in discussing the results before final interpretations are formed. Questions to address may include:
- What are the most valuable insights gained from the assessment results?
- What are the most important conclusions about the results?
- What strengths (and weaknesses) in student learning do the results indicate?
- What implications are there for enhancing teaching and learning?
- Determine the effectiveness and limitations of the assessment process. Questions to consider could include:
- Did the process define, as well as answer, questions that are important to understanding and enhancing student learning? If not, why?
- Were faculty and students motivated to participate in the assessment process? If not, why?
- Were the assessment methods easily implemented? If not, what improvements could be made?
- In what ways was the assessment process especially effective?
- What should (or will) change about the process? Why?
- Communicate findings and their implications to those involved with the program.
- Incorporate discussion of the assessment process and findings into the 8-year program review.