Choosing Assessment Methods

Once a program has developed a set of student learning outcomes, faculty members can focus on each outcome and ask, “How can we measure this outcome? What specific student behaviors, skills, or knowledge would demonstrate that students have learned what we expected them to learn? How could we convince a skeptic that our teaching has achieved a successful outcome?”

Programs should choose assessment goals that they themselves will find useful; that is, they should gather information on the aspects of student learning that faculty members genuinely want to know about. Assessment expert Barbara Walvoord advises, “Instead of focusing on compliance, focus on the information you need for wise action. Remember that when you do assessment . . . you are not trying to achieve the perfect research design; you are trying to gather enough data to provide a reasonable basis for action” (Barbara E. Walvoord, Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, 2nd ed., San Francisco: Jossey-Bass, 2010, p. 5).

Assessment need not involve a great deal of additional work for faculty members. The methods chosen must be reasonable in terms of time and implementation. Often, programs can build on measures already in place, such as capstone courses, juries, or exhibitions, combined with new assessment tools such as rubrics. Assignments and tests already used in courses are potential sources of evidence, and with proper planning the work of assessment can be done alongside the work of grading.

Ultimately, programs will use multiple methods of assessment, some combination of direct and indirect, quantitative and qualitative. Direct evidence of student learning comes from the work produced by students, such as examinations, written papers, capstone projects, portfolios, graded performances, and exhibitions. These products demonstrate actual learning. Indirect evidence can come from the perceptions of students and other stakeholders (such as employers) as to how well students have achieved a program’s goals, gathered through focus groups, surveys, and other research methods. Indirect evidence can also come from other indicators that imply the achievement of program learning outcomes, such as job placements, graduate school placements, and aggregated grades.

Why Don’t Grades Count as Assessment?

One of the most common questions from faculty members is why assessment is necessary given that students receive grades. Doesn’t a grade already demonstrate that a student has met the course learning goals?

The answer is that, while a grade is a global indicator of how one student has performed in a course, it does not directly indicate which learning goals the student has and has not met. A grade of “B” or “C” may indicate that a student met most goals but not all. Even an “A” does not necessarily mean that a student achieved all of the instructor’s learning goals, since course assignments and tests may not be directly tied to the underlying learning goals. In addition, grades are often based partly on behaviors such as course attendance and participation, which are not usually learning goals.

Similarly, overall grade point averages in a program do not indicate whether specific program goals are being achieved. If students in a program have a cumulative GPA of 3.5, what does that tell us about the overall strengths and weaknesses of learning in that program? Program learning is cumulative, but students’ performance in individual classes does not necessarily indicate that they are achieving the broader program goals. To evaluate student learning at the program level, it is necessary to examine specific areas of learning separately.
