• Assessment of Student Learning

    Assessment of student learning is the systematic process through which faculty members articulate learning goals for their students, measure students' progress toward meeting those goals, and use this information to improve their own courses, programs, and institutions.

    In this process, faculty members ask

    • What do we expect our students to learn?
    • How do we know whether they've learned it or not?
    • On the basis of the information from this assessment, what improvements in courses, programs, and the institution can we make to reach our goals for student learning?

    Program assessment is a tool that serves multiple purposes.

    • It enables faculty to gauge student learning at the program level and respond accordingly;
    • It allows the university to demonstrate its effectiveness in meeting its educational goals; and
    • It is an important part of the re-accreditation process.

    The language of assessment—terms such as "measure," "outcomes," and "accountability"—may be somewhat foreign to academia, but the concepts are not. It is routine practice for faculty to engage in setting expectations and to assess student performance. However, efforts are not always systematic or made explicit to students. A systematic process of assessing student learning offers a more comprehensive and transparent method for addressing the goals of a course, a program, and a university. Assessment results considered within this broader context can provide the foundation for more purposeful and effective planning. Download the Assessment Plan Guide.

    The Provost's Office will support academic programs and departments in developing methods of assessing student learning that are responsive to the specific qualities and needs of their disciplinary and professional domains. The Assessment of Student Learning Committee advises the Provost's Office on the direction, overall character, and quality of student learning assessment. The committee has developed the guidelines listed below.

  • Guiding Principles for Assessing Student Learning at The New School

    We are committed to quality teaching and learning and to continual re-examination of our effectiveness. Assessment is a tool that enables faculty to determine how to improve and reshape student learning.

    • Program assessment is directed by faculty with the intention of improving teaching and learning and is a formalization of work in which faculty already engage.
    • Assessment is not intended and will not be used to evaluate individual faculty members.
    • Program assessment is intended to evaluate students' broad learning experiences across the curriculum and co-curricular activities.
    • Assessment will vary across programs and should be designed to be ongoing and sustainable to support continuous improvement of student learning.

    Assessments help answer such questions as

    • Do our programs help students realize their personal, work, and life goals?
    • On graduation, are they well prepared for the lives they will pursue?

    Programs are recognized for conducting meaningful assessments and reforming their curricula in response to what they learn.

    Program Assessment

    Program-level outcomes assessment is the evaluation of overall student learning within a program rather than of particular instructors, courses, or individual students. Program assessment yields information about students as a group and reveals how students build and integrate learning as they move through a program of study.

    Assessing student learning at the program level often means simply shifting the lens through which faculty look at their own teaching. Instead of focusing on what particular faculty members are teaching, what courses students are taking, or what readings or assignments students are expected to complete, the program assesses what students are learning—the outcomes of faculty members’ teaching and students' work.

    The Steps of Program Assessment

    Assessment, whether at the course, program, college, or university level, can be viewed as a four-step cycle:*

    Four Step Cycle

    *As described by Linda Suskie, vice president of the Middle States Commission on Higher Education (MSCHE), in “Getting Started with Student Learning Assessment,” MSCHE Workshop, September 15, 2010.

    The first step is a clear statement of learning goals for the graduates of a program. Until assessed and proved, these learning outcomes are properly described as “intended” or “expected” outcomes. What do you want your graduates to know (cognitive objectives), to value (affective objectives), and to do (behavioral objectives) when they have completed your program? You probably intend for students to acquire specific knowledge and skills, but you may also aim to cultivate certain attitudes in students, such as valuing civic engagement or willingness to consider different points of view.

    Once program goals have been articulated, faculty members in a program can consider the learning opportunities their students are being offered to achieve the goals. Are all the learning goals adequately addressed in the existing curriculum? Have some been omitted or inadequately represented? The relationship between learning goals and curriculum can be systematically examined through an exercise called curriculum mapping (PDF).

    The next step is finding methods to determine whether students in a program are achieving the learning goals that the program’s faculty have articulated. Faculty members should choose assessment methods that help them answer questions they have about student learning.

    The last step of the cycle is to use the assessment evidence to improve the program. Faculty members are always making decisions about teaching and learning, often on the basis of informal observations and discussions. Learning assessment provides more objective and systematic evidence for faculty members to use to inform and shape their decisions. This is one goal of assessment—providing data for informed decision making.

    Assessing Student Learning in Programs at The New School

    Degree-granting programs at The New School evaluate students' learning outcomes and provide a summary of these assessments to the Provost’s Office on an annual basis. Programs may choose between two different frameworks for their student learning assessment. The standard model asks programs to select one or more student learning outcomes to assess each year and choose assessment methods for determining the extent to which students have met those outcomes. Programs using the standard model submit an annual plan (PDF) and an annual report (PDF).

    In fall 2014, a new framework was developed to highlight the role played by faculty curiosity about student learning and program effectiveness. Using this approach, a program asks one or more research questions related to student learning and selects assessment methods to address the question(s) on this template (PDF). Programs report on their findings and any changes planned as a result of the findings using this template (PDF).

    In designing an inquiry-based assessment project, program faculty may want to begin by thinking about changes in the program currently under consideration, or may take this opportunity to pose new questions about the program.

    The following are examples of questions that a program might ask:

    • What pedagogical methods are used by program faculty? Do these methods align with program goals, and are they optimal for the types of student learning desired?
    • Where is project-based teaching occurring in the program? Is it used effectively? What classes could incorporate project-based learning?
    • Is course X effective as a program requirement?
    • Are courses sequenced effectively within the program?
    • Do students have adequate opportunities to practice their writing (or other) skills?
    • Do students effectively transfer, integrate, and build upon skills and knowledge as they move from lower-level to upper-level courses?
    • Can students effectively reflect upon their learning and articulate its relevance for their career goals?
    • Would integration of student learning across the program be improved if the senior capstone were expanded from one semester to two?

    This inquiry-based approach is drawn from the work of Peggy Maki. See Maki, P.L., Assessing for Learning: Building a Sustainable Commitment Across the Institution (Sterling, VA: Stylus Publishing, 2010).

    Tips on Writing Program Learning Outcomes

    Learning outcomes should itemize the most important goals the program has defined for its students. Programs often have many goals; learning outcomes capture the most important of them.

    A well-written goal relates specifically to a program and to how the program's faculty envision student learning. For example, "Students will write effectively" could apply to almost any academic program. In contrast, Kansas State University's English department's goals that students be able to "research and write focused, convincing analytical essays in clear, grammatical prose" and "tailor writing for various audiences and purposes" indicate what that department sees as essential writing skills. Other programs, even other English departments, may focus on different aspects of writing. The point is to articulate your department's goals.

    1. Frame all learning goals around the desired outcome or end result of the learning, not around the process or means.

      Describe observable student behaviors; avoid fuzzy terms when possible. Many will find Bloom's Taxonomy (PDF) useful for choosing specific verbs. For example, it is difficult to observe whether a student "understands" or "appreciates" a concept but easy to judge whether he or she can "articulate" or "explain" one. Concrete verbs such as "define," "argue," and "create" are more helpful than vague verbs such as "know" or "understand" or passive verb phrases such as "is exposed to." Learning outcomes phrased with concrete verbs will help guide the choice of assessment methods. It is much easier to envision assessing whether a student can "define" something than whether he or she "appreciates" something.
    2. Be specific about what students who complete the program should be able to do.

      For example, "understand" is not only vague but may not appropriately represent what is expected of students. Understanding is a low-level cognitive outcome, and a program may want its students to be able to do more than "understand" a concept, such as "critique" it or "apply" it.
    3. Find a balance between specific and broad outcomes.
      • Too vague: "Students will demonstrate information literacy skills."
      • Too specific: "Students will be able to use the college's online services to retrieve information."
      • Better: "Students will be able to locate information and evaluate it critically for its validity and appropriateness."*
        *Example from Linda Suskie, Assessing Student Learning: A Common Sense Guide, Second Edition (San Francisco: Jossey-Bass, 2009).

    Goals should be challenging yet attainable. It is not necessary for every student to attain every goal for a program to demonstrate success; in fact, such an outcome might indicate that the goals have been set too low. The University of Connecticut provides a useful overview of how to write program objectives and outcomes (PDF).

    Examples of Effectively Expressed Learning Goals

    Linda Suskie, a vice president of the Middle States Commission on Higher Education, provides examples of effectively expressed program learning goals.* In her examples, the goals are broad enough to capture significant, higher-order learning but are defined narrowly enough to be specific to the programs.

    English: Present original interpretations of literary works in the context of existing research on these works.
    Environmental Science: Critically evaluate the effectiveness of agencies, organizations, and programs addressing environmental problems.
    Theater: Use voice, movement, and understanding of dramatic character and situation to affect an audience.
    Women's Studies: Use gender as an analytical category to critique cultural and social institutions.

    *Linda Suskie, Assessing Student Learning: A Common Sense Guide, Second Edition. (San Francisco: Jossey-Bass, 2009), page 132.

    One advantage of well-written goals is that they help guide the choice of assessment methods. It is easy to imagine how the goals stated above might be assessed: An English student could write a paper presenting original interpretations of literary works, or a theater student could demonstrate these skills in a performance.

    More Programs That Have Well-Expressed Learning Goals

    Choosing Assessment Methods

    Once a program has developed a set of student learning outcomes, faculty members can focus on these and ask, "How can we measure this outcome? What specific student behaviors, skills, or knowledge demonstrate that students have learned what we expected them to learn? How could we convince a skeptic that our teaching has achieved a successful outcome?"

    Programs should choose assessment methods that they themselves will find useful; that is, they should gather information on the aspects of student learning that faculty members really want to know about. Assessment expert Barbara Walvoord advises, "Instead of focusing on compliance, focus on the information you need for wise action. Remember that when you do assessment . . . you are not trying to achieve the perfect research design; you are trying to gather enough data to provide a reasonable basis for action." (Barbara E. Walvoord, Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, Second Edition, San Francisco: Jossey-Bass, 2010, page 5)

    Ultimately, programs will use multiple methods of assessment, some combination of direct and indirect and qualitative and quantitative. Direct evidence of student learning comes from the work produced by students, such as examinations, written papers, capstone projects, portfolios, graded performances, and exhibitions. These products demonstrate actual learning. Here is a list of many possible Examples of Direct Measures that can be used for assessment. 

    Indirect evidence can come from the perceptions of students and other stakeholders (such as employers) of how students have achieved a program's goals, perceptions reported through focus groups, surveys, and other research methods. Indirect evidence can also come from other indicators that imply the achievement of program learning outcomes — for example, job placements, graduate school placements, or aggregated grades. Here is a list of many possible Examples of Indirect Measures that can be used for assessment.  


    Here is a list of many possible sources of evidence of student learning (PDF) that can be used for assessment.

    Why Don't Grades Count as Assessment?

    One of the most common questions from faculty members is why assessment is necessary given that students receive grades. Doesn't a grade already demonstrate that a student has met the course learning goals?

    The answer is that, while a grade is a global indicator of how a student has performed in a course, it does not directly indicate which learning goals the student has and has not met. A grade of "B" or "C" may indicate that a student met most goals but not all. Even an "A" does not necessarily mean that a student achieved all of the instructor's learning goals, since course assignments and tests may not be directly tied to the underlying learning goals. Also, grades are often based partly on behaviors such as attendance and participation, which are not usually learning goals themselves.

    Similarly, overall grade point averages in a program do not indicate whether specific program goals are being achieved. If students in a program have a cumulative GPA of 3.5, what does that tell us about the overall strengths and weaknesses of learning in that program? Program learning is cumulative, but students' performances in individual classes do not necessarily indicate that they are achieving the broader program goals. To evaluate student learning at the program level, it is necessary to examine specific areas of learning separately.


    Direct evidence of student learning is most often obtained by examining student work: examinations, essays, artistic creations, performances, class presentations, and the like. To evaluate the learning demonstrated by students consistently, scoring guides called "rubrics" are frequently employed. A rubric is simply a list or chart that outlines the criteria or standards used to evaluate student work. Rubrics range from simple checklists and numerical rating scales to the so-called "full rubric," which describes a student's performance at each of several levels. Rubrics are useful for grading individual assignments, but when the rubric scores of individual students are gathered and collated, they can provide powerful evidence of student learning in a program.

    The Association of American Colleges and Universities recently completed a major project as part of its Liberal Education and America's Promise (LEAP) initiative. The Valid Assessment of Learning in Undergraduate Education (VALUE) project brought together more than 100 faculty members, assessment specialists, and other academic professionals to develop consistent rubrics for evaluating the "essential learning outcomes" of undergraduate liberal education as defined in the LEAP initiative. These rubrics, which are easily adapted to suit the needs of particular programs or institutions, cover critical thinking, creative thinking, written communication, oral communication, civic engagement, teamwork, and other topics. For more information and to view and download the rubrics, go to www.aacu.org/value/.

    Internet searches will yield a wide variety of rubrics on various topics to use as models. Some examples are posted here:

    Carnegie Mellon University's Philosophy Paper (PDF); Discussion (PDF)

    University of Baltimore's Team Member Effectiveness (PDF); Graduate Analytical and Problem-Solving (PDF)

    Southern Illinois University's Written and Oral MA Exam (PDF)

    Washington State University's Critical and Integrative Thinking (PDF)

    Purdue University's Dissertation Proposal (PDF)

    The University of Connecticut Assessment Primer discusses in detail how to create rubrics.

    Assessment Resources

    The Middle States Commission on Higher Education, like all regional accreditation agencies, requires the assessment and documentation of student learning outcomes as a condition of accreditation. Some institutions began systematically assessing student learning even before this was an accreditation requirement, recognizing its usefulness for institutional progress. Now, nearly every educational institution in the United States is engaged in this process, and a wealth of resources has been developed, many of them available on the Internet. Here are some recommended sources of information.

    General Resources

    North Carolina State University's Office of Planning and Analysis has put together a comprehensive list of internet resources; it is updated regularly.

    The National Institute for Learning Outcomes Assessment (NILOA) offers many resources, including a toolkit of information on common assessment tools such as portfolios and curriculum maps. NILOA also offers this assessment primer, What New Faculty Need to Know About Assessment (PDF).

    In this article published in Inside Higher Ed, David Scobey, executive dean of The New School for Public Engagement, reflects on the history and purposes of learning assessment and suggests a number of useful ways to assess learning in the humanities.

    Resources for Program-Level Assessment

    Many institutions have created assessment handbooks. The handbook developed by the University of Massachusetts at Amherst comprehensively discusses the steps of program assessment (PDF).

    Most academic professional associations provide learning assessment resources, which can be located through Internet searches.

    A few examples: