I’m an elementary school teacher with a crummy evaluation. It’s a testament to the families at my school that I didn't spend my February vacation responding to angry or terrified emails from my students’ parents.
Last month, my ranking based on student test scores was made public, along with those of 18,000 other fourth- through eighth-grade teachers in New York City's public schools. While my English scores said I was more effective than the majority of teachers, my math scores were terrible.
Like thousands of fellow teachers, I know the scores are inaccurate. The “teacher data reports,” as they’re known, are flawed and humiliating.
They also destroy good teaching practices.
Many critics before me have pointed out the reports’ statistical flaws. These are obvious in my report, which shows extremely high margins of error, meaning my score could actually fall anywhere between 15 and 90. The sample sizes are small, in my case ranging from 13 to 24 students. My class rosters are incorrect. In the only year for which I received a report, I taught a fourth- and fifth-grade mixed-age class, but my report omits my fifth-graders’ English scores.
My 2011 report, yet to be published, contains more flaws. When I asked the principal why two of my strongest readers were omitted from my English roster, I discovered that students who are absent for any portion of the test are omitted from reports, even if they make up the test a few days later.
This year, I teach an integrated class along with a special educator, which complicates the reporting even more. Co-teaching arrangements are designed for students to have access to the expertise of two teachers to support their learning. But the data reports are now pushing us to focus only on the students whose names appear next to ours.
In an attempt to correct mistakes in teacher rosters, my principal asked me to work with my co-teacher to report who was the primary teacher who worked with a given student for both math and English.
At first this seemed like a fair way of reporting, but once we started to break down our roster, we were stumped. Where should we place the student who worked in a small group with my co-teacher to hone some basic arithmetic skills only to graduate to my larger lessons a few months later? What about the struggling reader who has failed to make progress in my co-teacher’s reading group, but is making steady progress through my writing lessons?
Such a system discourages teachers from working with struggling students. Many times this year, I've advocated mainstreaming special education students in subjects where they’ve been able to progress toward grade level. But with my reports made public and tied to the advancement of my career, I can't help but wonder if I’m doing myself a disservice.
The data reports push teachers toward questionable educational practices. When our contract extended the work day an additional 37.5 minutes in 2006, the time was meant to serve as extra help for struggling students. At my school, teachers sign up to work with students in a particular area of need. They’re assigned students from multiple classes based on who needs that kind of help. Previously, teachers could give the same attention to all the students in their extended-day groups, but the evaluations create a disincentive to work closely with students not on one’s own roster.
The school where I work is known for its progressive, child-centered curriculum and practices. Our students engage in inquiry-based learning, meaning they are encouraged to ask questions and do their own research, with a focus on social studies, the arts, and independent projects. None of these elements factor into the data reports that administrators, the media, education privatizers, and parents are using to judge the quality of my teaching.
Teachers at my school are overwhelmingly opposed to high-stakes testing as a distraction from a more meaningful curriculum that promotes critical thinking.
But in a climate that bases a teacher’s worth on her students’ test scores, with a public trial for everyone to view and dissect, it’s hard to imagine that we could remain unaffected by the narrowing effects of the tests.
This article was originally published by Labor Notes.