Recommendations from the Field

The idea of principal empowerment is straightforward: Principals and teachers—not distant bureaucrats—are best equipped to decide how to serve the children with whom they work in their schools each day. Accountability is a simple notion, too: If principals have the freedom to make decisions, they also have the responsibility to demonstrate that their students are making progress. Schools Chancellor Joel Klein has used these concepts successfully to dismantle the dysfunctional bureaucracy that ran New York City schools for decades.

The difficulty comes in translating these simple ideas into practice. Are all principals, even brand new ones with little teaching or administrative experience, able to make sound decisions without significant guidance and supervision? And how do you measure progress, anyway?

Much like charter schools, data-driven accountability has become hotly politicized in American public education. In a speech late last year, Mayor Michael Bloomberg said accountability data will be used to justify shuttering 10 percent of the city's schools over the next four years. At the same time, some critics have opposed closing any schools at all and dismissed standardized tests as meaningless. Supporters and opponents each make valid points, but both sides are prone to exaggeration. The truth is, many school closures and principal removals in recent years have led to positive change, while others have not. The data that drove these and other decisions were sometimes strong, sometimes too limited.

In this report, we illuminate positive impacts as well as problematic aspects of principal empowerment and New York City's data-driven accountability system. It is clear that, under the current structure, some principals need more support than they are getting and that the accountability system rests too heavily on state test scores, which provide an unreliable measure of student growth.

The following are policy and practice recommendations developed with the guidance of the Center for New York City Affairs Schools Watch advisory board. These proposals build on strengths of the existing system, but call for important changes. These include varying the sources of information used for accountability; dramatically improving the breadth, rigor and usefulness of qualitative assessments of the city's schools and school leaders; and reducing the supercharged political use of test-score data, which can have a deleterious impact on schools that are in fact headed in the right direction. Perhaps most important, these improved accountability data should be distributed and used in a less confusing way in order to deepen the public's understanding of school quality. This would also provide families with the cleanly distilled information they need to make informed decisions about their children's education.

Recommendation 1: The Department of Education should not oversimplify the strengths and weaknesses of each school by labeling each with a single "A" to "F" letter grade. At present, the annual Progress Reports give each school a single letter grade and a numerical ranking from 1 to 100. This simply doesn't accurately reflect each school's strengths and weaknesses, yet it plays a critical role in public perceptions of the school, decisions made about its future, and its ability to attract and hold staff. Students are never given a report card with a single grade to reflect their own work, but rather several grades for several subjects. Similarly, some schools are particularly good for children who are just learning English; others take in ninth graders who are several grades behind and graduate them on time; still others offer college-level classes to high-achieving kids while they are still in high school. A more useful annual report card would reflect that schools are strong at some things and weak in others.

Recommendation 2: The DOE should develop and rely on school report cards that award several different grades, reflecting different aspects of their work. In addition to giving schools a single letter grade and a numerical ranking, the current Progress Reports give schools three separate grades for school environment (based on parents' and students' perception of safety and atmosphere), school performance (achievement as measured by standardized tests and graduation rates) and student progress (an attempt to show year-to-year gains in achievement). (See "How to Read a School Progress Report.") Each piece of the Progress Report is useful, but there should be more to this mix. How well does the school rate on measures related to special needs children? How well is it doing on attendance improvement? Results from the DOE community surveys should be supplemented with data from qualitative in-school assessments by well trained reviewers. A more nuanced school report card with a set of perhaps six grades, presented with equal emphasis, would still provide accountability measures that school leaders can use. But it would also offer parents a more valuable tool for understanding aspects of the school. Parents can decide for themselves if, for example, a school with a "D" grade on environment and an "A" in progress is a good school or a bad one—and vice versa. And principals can more effectively focus on areas that need improvement rather than chasing the test scores that have overriding weight in the current single-grade system.

Recommendation 3: The annual school evaluation should give more weight to attendance. Regular attendance is crucial to children's long-term academic success, but a school's attendance rate counts for only 5 percent of its Progress Report score. Rates of teacher absenteeism are also significant, but are not included in the current Progress Report grade. Some schools with very low attendance receive high marks on their Progress Reports. This should be a red flag that something is amiss—a sign, perhaps, that test scores are inflated or other data on the Progress Report are inaccurate. Also, increased attendance is sometimes an early sign of school improvement; an effective principal may succeed in boosting attendance a year or two before he or she succeeds in increasing test scores. An increase in student attendance (and a decrease in teacher absenteeism) should be rewarded. The DOE should therefore weigh attendance by both teachers and students more heavily in its Progress Reports and should consider assigning bonuses based on improved attendance.

Recommendation 4: The DOE should reduce its over-reliance on purely statistical measures and increase the role of methodologically sound, qualitative assessment in its school evaluations. Even at their best, test scores and other quantitative data tell only part of the story of a school. However, the DOE has another tool, called a Quality Review, that relies on school visits by a superintendent or a consultant. Principals have complained that this instrument changes every year and that the varying skillfulness of the reviewers makes it unreliable. Many principals cite the initial Quality Reviews, performed in 2006 and 2007 by highly trained consultants from Cambridge, England, as the most valuable reviews they've had. Unfortunately this arrangement proved too expensive, officials say, and the quality assessment was brought in-house by the DOE. Now, after several adjustments, the department has developed stronger, more coherent guidelines defining what its reviewers should look for. The new reviews look at what goes on in classrooms, the quality of student work, the coherence of the curriculum and the ability of teachers to work together as a team. The guidelines are strong, but the methodology must be tightened and made consistent across schools. Training of reviewers must be intensive and their work must be closely monitored for adherence to the methods and the rubric. Finally, when this work is accomplished, the results of these Quality Reviews should carry much greater weight in the annual scored evaluation of each school.

Recommendation 5: The DOE's Children First Network structure should formally recognize that some principals need greater supervision, and provide it. While the notion of empowerment is a good one, different principals need different levels of support and, in some cases, closer oversight. John Garvey, recently retired as the City University of New York liaison to the public schools, says the DOE has entrusted the well-being of too many of the city's public school children to inexperienced principals. "Some of these could be effective principals, but they are being asked to select support systems without necessarily knowing what they need to learn," Garvey says. The DOE needs a mechanism to offer more guidance to principals who need it, steering them into networks with an appropriate focus on close, supervisory engagement with a coach.

Recommendation 6: The DOE should place experienced principals in the toughest schools. The Leadership Academy, the DOE's fast-track training program for aspiring principals, has graduated many successful new leaders for New York City's public schools. Academy graduates tend to be young, and many have minimal teaching and administrative experience. Yet they are often placed in the most challenging schools, those with very low-achieving students and teachers and parents who may be hostile to anyone seen as an outsider. The DOE should send its most effective, seasoned principals to schools like these. In 2008, the DOE created the position of executive principal and gave experienced principals an annual bonus of $25,000 if they agreed to lead struggling schools for three years. Executive principals have been successful in a number of schools. At the very least, more of these principals could serve as formidable supervisory coaches for less-experienced principals.

Recommendation 7: The DOE should form an advisory board for psychometrics. The state Education Department has enlisted nationally recognized experts on testing to review its use of tests. These experts, including Howard Everson of the City University of New York and Daniel Koretz of the Harvard Graduate School of Education, are part of the state Technical Advisory Group. The city needs to enlist experts such as these to inform and evaluate its use of standardized tests. The state provides the reading and math tests for grades three to eight and the Regents exams for high school students. These tests were designed to measure proficiency, that is, whether a child meets state standards. The city wants to measure year-to-year progress, but the existing tests are an imprecise tool for that purpose. Enlisting academics who specialize in the complex field of educational testing will make the city's accountability system stronger and more reliable.

Recommendation 8: The DOE should not close a school until it has something better to put in its place. The DOE's plans to close schools often stir up much anger, in part because parents and other community members don't have confidence that better schools will replace them. The DOE could defuse some of this anger by explaining its plans for replacement schools to parents at the same time it announces closings. Under state law, the DOE is supposed to present detailed "education impact statements" when it announces school closings. A state Supreme Court judge recently ruled that the DOE failed to do so when, for example, it vowed to close a number of high schools without making adequate plans for special programs that would be disrupted, such as a child care center that enables teen mothers to stay in school and graduate. The public has a right to know what will happen to children who are displaced by school closings.

Recommendation 9: The mayor, the chancellor and the city's opinion leaders shouldn't oversimplify the meaning of accountability data in their public statements. Data-driven accountability has become the latest school reform to be hyper-politicized. Of course outcomes matter, and every child deserves a school that is held to the highest standards. But those standards are emphatically not described in full by the overly simplistic statistical measures that routinely garner the attention of the news media and political leaders. Test scores and Progress Report grades are insufficient measures of school success; every teacher and school administrator knows this, but they are constrained and incentivized by the rules that political leaders impose upon them. New York can do better than this.