Alan Singer of Hofstra University in New York wrote a critique of edTPA, a new assessment of student teachers, which was posted here. He called it “The Big Lie Behind High Stakes Testing of Student Teachers.”
A group of faculty at the City University of New York wrote to explain why they support edTPA, and it was posted here.
In a new article, Alan Singer writes that “edTPA is currently being implemented in a number of states through a partnership of SCALE (the Stanford University Center for Assessment, Learning, and Equity), the American Association of Colleges for Teacher Education (AACTE), and the publishing and testing mega-giant Pearson Education.” About the same time, SCALE released “edTPA MYTHS and FACTS,” which responded to critics of edTPA and purports to set the “record straight.”
In his comments here, Singer points out that the supporters of edTPA are a small fraction of the CUNY faculty, and he challenges their defense of edTPA.

Alan Singer has prepared an excellent article with many good points. I just want to look at this one statement, because we had experience with this. When Massachusetts first started testing basic skills, essays were scored by English teachers, and ETS came in to train our staff, faculty, and representatives in 35 school districts. Inter-rater reliability was something we needed to concentrate on carefully.
Quoting Alan: “Among my major criticisms of edTPA are my questions about the qualifications of the evaluators and the inter-rater reliability of the evaluations.” Every time a new person entered the rating sessions, or a new rating session was scheduled with different or new raters, we had to re-calculate the inter-rater reliability. It’s not something you do once and for all and then make a blanket statement about. When the essay topics changed, or when new raters were brought in, inter-rater reliability was measured again for each session, for each different topic or theme and each group of raters. For another project, with rating scales developed at Boston Children’s Hospital, it took us two years to get high inter-rater reliability with school staff and project staff. I can’t imagine that Pearson’s “raters” have the stability that would hold up under scrutiny.
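For readers who want a concrete sense of what “inter-rater reliability” measures, here is a minimal sketch, assuming two raters scoring the same set of portfolios on a numeric rubric. It computes Cohen’s kappa, one common agreement statistic; the post does not say which statistic SCALE or Pearson actually uses, and the scores below are made-up illustration data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters on the same items, corrected for chance.

    rater_a, rater_b: equal-length lists of scores for the same portfolios.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)

    # Observed agreement: fraction of items both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Agreement expected by chance, from each rater's own score distribution.
    dist_a = Counter(rater_a)
    dist_b = Counter(rater_b)
    expected = sum(dist_a[s] * dist_b[s] for s in dist_a) / (n * n)

    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical example: two raters scoring ten portfolios on a 1-5 rubric.
scores_a = [3, 4, 2, 5, 3, 3, 4, 2, 5, 3]
scores_b = [3, 4, 3, 5, 3, 2, 4, 2, 5, 4]
print(round(cohens_kappa(scores_a, scores_b), 2))  # roughly 0.59
```

The point of the commenter stands independent of the particular statistic: whatever coefficient is used, it describes agreement only for the specific raters, prompts, and sessions on which it was computed, so it has to be re-estimated when any of those change.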
Singer is right!