In Support of a Performance Assessment of Teaching
June 2014
Beverly Falk, Professor and Director, Graduate Program in Early Childhood Education, The City College of New York
Jeanne Angus, Assistant Professor and Program Director, Graduate Program in Special Education, Brooklyn College
Greg Borman, Lecturer, Secondary Science Education, The City College of New York
Nancy Cardwell, Assistant Professor, Graduate Program in Early Childhood Education, The City College of New York
Joni Kolman, Assistant Professor, Department of Teaching, Learning, and Culture, The City College of New York
Geraldine Faria, Assistant Dean, School of Education, Brooklyn College
Christy Folsom, Associate Professor, Childhood Education, Lehman College
Nancy Martin, Adjunct Instructor, Childhood Science Education, Brooklyn College
Andrew Ratner, Assistant Professor, Secondary English Language Arts Education, The City College of New York
Deborah Shanley, Professor, Special Education/English, Secondary Education and Dean, School of Education, Brooklyn College
Jacqueline D. Shannon, Associate Professor and Chair, Department of Early Childhood Education/Art Education, Brooklyn College
Beverly Smith, Associate Professor, Secondary Mathematics Education, The City College of New York
Christina Taharally, Associate Professor and Director, Graduate Programs in Early Childhood Education, Hunter College
The media and the blogosphere have lately been filled with discussions about teacher education. Think tanks, states, and the federal government have questioned the efficacy of teacher preparation programs and are proposing accountability measures for them that resemble high-stakes testing in P-12 schools. Many have responded to these problematic policies, which emphasize targets and sanctions rather than supports for improvement, with critiques of the dysfunctional consequences they generate: an over-emphasis on tests that narrows the curriculum, and the use of value-added measures (such as using students’ test scores to evaluate teachers and graduates of teacher education programs) that do not account for all of the complex factors influencing learning. Critics rightly point to these policies as creating disincentives to teach students who traditionally do not score well on tests (those who are poor, are new immigrants, are English language learners, or have special needs).
Ironically, however, some who are reacting to these negative effects of the test-and-punish approach are including in their attack an initiative specifically designed to push back against it. They target a performance assessment for teachers designed by the profession for the profession – the edTPA – which calls on prospective teachers to demonstrate through performance (not multiple-choice tests) that they have the professionally agreed-upon skills and knowledge to enter a classroom ready to teach. Opponents of the edTPA make inaccurate claims about it: that it is tied to a high-stakes testing regime and outsources evaluations to a private corporation, Pearson; that it demands a single approach to teaching and teacher education; that it usurps academic freedom and faculty control of curriculum; and that it has no research base for evaluating good teaching. Alan Singer’s recent blog post, an example of this opposition, also claims that edTPA “distracts student teachers from the learning they must do on how to connect ideas to young people and undermines their preparation as teachers.”
We, teacher educators who have used the edTPA, write here to offer a different perspective – to share how it has supported our teaching, our program development, and our students’ learning.
Who we are:
We are teacher educators from the City University of New York, a university composed of many campuses across NYC that serve a socioeconomically, culturally, racially, and linguistically diverse population of students. We are advocates for equity and access in education. We support culturally responsive teaching and assessment practices that focus on deep understanding, critical thinking, and analysis of complex issues. We believe assessment should examine what learners know and can do in authentic contexts, and that assessment results should be used to support and improve, not to target and sanction. Additionally, we support national efforts to make educator preparation more clinically based, so that graduates of educator preparation programs learn, in the context of real-life teaching, to combine theoretical with practical knowledge and can enter their classrooms ready for the incredibly difficult realities of teaching.
Because of these values we welcome the teacher performance assessment (edTPA) –
a performance assessment of teaching developed by hundreds of teachers and teacher educators across the country, in a process led by Stanford University’s Center for Assessment, Learning, and Equity (SCALE), with support from the American Association of Colleges for Teacher Education (AACTE). Based on 25 years of research and practice, this assessment is currently being used in over 500 institutions in 34 states across the United States. In New York State, it is newly required for certification. As a result of our first years of experience using it, we find the edTPA to be a tool for our improvement and a valuable guide to effective teaching.
edTPA: A performance assessment of commonly-agreed upon foundational skills
For those who are not familiar with the edTPA, here is a brief outline of what it requires. It is a performance assessment that consists of three tasks that call on prospective teachers to demonstrate and explain their ability to carry out universally accepted essentials of good teaching:
1) plan three to five interrelated learning experiences, taking into consideration the cultural, linguistic, and learning backgrounds of students;
2) teach what they have planned, demonstrating through video a segment of a learning experience accompanied by a reflective commentary; and
3) assess students’ work – examining artifacts of student work, including work by students with learning and language differences, for the purpose of using what has been learned from that work to inform future teaching.
Alan Singer and other opponents of edTPA claim that the developers of edTPA (which he inaccurately refers to as including Pearson and New York State) “are trying to sell the public that you learn to teach, not by teaching, but by writing about it. They also want you to believe that they have perfected a magical algorithm that allows them to quickly, easily, and cheaply assess the writing package and accompanying video and instantly determine who is qualified to teach our children.”
Contrary to these claims, we do not see the edTPA as simply a tedious writing exercise.
Indeed, the requirements of the edTPA are teaching: planning a curriculum, teaching it over several days, adjusting plans based on what students are learning, and assigning and evaluating student work to shape future teaching are, in fact, what teachers do. In addition to actually teaching, we have experienced the edTPA as a useful opportunity to reflect on teaching strategies and students’ needs. We believe, as do the many representatives of professional associations and experienced educators who participated in the development of the edTPA, that an essential part of teaching professionalism is being able to explain what we do and why we do it. Only educators who can articulate and defend their practices can uphold the professionalism needed to strengthen our field. Furthermore, we do not agree with the claim that the edTPA demands a single way of demonstrating good teaching. The lessons candidates plan are developed by them; the materials they use are chosen by them; the strategies they employ are their choice. The assessment offers a frame that has room for many different approaches. We do not think it takes the artistry out of teaching; instead, by sharpening the focus of our preparation on commonly agreed-upon foundational skills, it enables not only the artistry but also the joy of teaching to take place.
edTPA is controlled and scored by the profession, not the Pearson corporation
Contrary to the claims of Alan Singer and others, the edTPA is not a Pearson assessment. The facts are that Pearson is an operational partner (much like the publisher of a text), responsible for creating and managing the online platform that collects portfolios and delivers them to the teachers and teacher educators who score them. This capacity enables the assessment to be used on a national basis.
Neither is the edTPA scored by Pearson. Singer inaccurately claims that “student teachers are not being evaluated by trained field supervisors or cooperating teachers, but by temporary evaluators of questionable qualifications … who are hired by Pearson.” This is not true. The facts are that the edTPA is scored by experienced teachers and teacher educators who are rigorously selected through a process developed by the national consortium of educators who designed the edTPA, and who are then trained to examine prospective teachers’ work in relation to commonly agreed-upon descriptors of exemplary practice (also a process designed by the edTPA consortium).
In our experience, using the edTPA has not taken the development and evaluation of our students out of the hands of our faculty, trained field supervisors, or cooperating teachers. Rather, it has been a helpful complement to our coursework and to the feedback we offer in clinical experiences. In fact, as a result of working with edTPA, not only have we been prompted to develop more coherence in our courses and across our programs, but we have also found that edTPA has prompted our prospective teachers to demonstrate a more intentional and reflective approach to their teaching. Overall, we believe that the edTPA is helping us to better prepare our graduates.
Prospective teachers’ perspectives
The perspectives of prospective teachers who have taken the edTPA confirm what we have experienced. In a recent state pilot of the edTPA, for example, researchers found that 96% of teacher candidates reported that the edTPA was a positive influence on their learning, pointing especially to how it made them more self-aware and focused them on student learning. More than 90% of teacher educators reported that the experience of supporting the edTPA enabled them to reflect on and improve their program design and instruction. For more evidence about the effects of this assessment on candidate learning and teacher education improvement, see http://edtpa.aacte.org/resources.
These research findings are reflected in the remarks of Peter Turner, a graduate of The City College of New York’s School of Education, who noted in a recent interview:
[Although I had student taught already], for me [edTPA] was the first time that I considered every aspect of what it means to be an effective teacher…. The edTPA is a good test because it scaffolds every aspect of what a teacher needs to do. It was a wonderfully educative experience for me.
— Peter Turner, City College of New York
Lehman College graduate, Roshawna Cooper, adds:
Looking at my lesson plans when I was doing edTPA and looking at my older lesson plans when I did methods courses without edTPA, I missed a whole chunk. There was a lot missing. I am so much more mindful of my students now when I am teaching and the effectiveness of my different lessons. I am more mindful about how to build on each lesson to support my students’ skills. And that is really big. (See http://edtpa.aacte.org/resources/candidate-to-candidate-reflections-on-taking-edtpa)
Safe to practice/ready to teach: Accountability by the profession for the profession
All professions have external certification exams and a commonly agreed-upon set of foundational knowledge. In fact, professions that are responsible for the safety and well-being of humans all require that their certification processes demonstrate not only that new entrants to the profession have knowledge and skills but that they know how to apply that knowledge and those skills and are safe to practice with those entrusted to their care. We believe that the edTPA, by asking new teachers to demonstrate that they know how to teach before they are given the privilege of taking responsibility for children’s lives, is a genuine and valid measure of our work. Because the edTPA’s rich descriptions and analyses of teaching are aligned with critical, commonly agreed-upon elements of effective practice while allowing for individuality and flexibility in content and style, we believe it serves as a useful tool and guide for teaching and is a positive step forward for us as a profession.
Although there is no such thing as a perfect assessment, especially for something as complex as teaching, we believe that the edTPA vastly improves the process by which teachers are certified in New York State. It is a mechanism for us as teacher educators to demonstrate the outcomes of our work and to hold ourselves, as a profession, accountable for what we do. It stands as a viable, genuine accountability measure for graduates of teacher education, as opposed to sole reliance on standardized tests. While its implementation has posed challenges and calls for changes in “business as usual” in teacher education, we believe these changes are well worthwhile, because the edTPA reflects our aspirations, celebrating what it means to be a teacher and putting into practice the educative aspect of what high-quality assessment should be.
Reblogged this on jsheelmusic and commented:
I feel this is a well-written blog entry. I agree, based upon my experience working with edTPA, that the assessment is a great opportunity to foster true learning and growth for both students and faculty.
This week I received an invitation to serve as an edTPA scorer for secondary math education. I am not certified in mathematics and have never been a secondary math teacher. edTPA is a standardized assessment scored by outsourced raters whose qualifications are varied and dubious. It has not been widely embraced by teacher educators (it is being used because it is mandated); if it had been, promotional letters such as this one would be unnecessary.
Your comment, Learning First, could not have made the point more clearly.
learningfirst,
On the application for edTPA scoring (which I am assuming you filled out), you choose your areas of expertise. Did you check mathematics?
Over Labor Day weekend 2013, AACTE endorsed CAEP, a group that wants to bring VAM into edTPA.
I, too, have worked with the edTPA with students, and debriefed them extensively. Not a single one of them found it more valuable than what we were already doing (the Oregon Work Sample, which also has planning and assessment components, but is scored internally, not in the high-stakes manner of edTPA).
None of them would have passed either, based on the proposed cut score, despite considerable support from faculty. This tells me that it will take substantial refocusing of the entire program to, yes, teach to the test, which has been the experience in other states. There’s no way to maintain the same focus on culturally responsive, socially engaged teaching, when there’s a high-stakes assessment that doesn’t care about those. This, I believe, is part of why the National Association for Multicultural Education (NAME) came out against edTPA.
Finally, there’s no question that it’s primarily a test of students’ ability to write about their teaching. Despite two short segments of video (and who would want their teaching ability judged on 20 minutes of video!), edTPA is mainly dozens of pages of writing about teaching – and students are asked to both show a wide range of skills and draw large assessment conclusions based on as few as 3 lessons. Is the edTPA related to actual classroom skills? Sure. Is what it shows worth transforming our programs and subjecting students to the cost and stress of a high-stakes exam? No.
LearningFirst, I am curious about your position – do you work with secondary math students at all? Do you help prepare them (even if you were not trained in it or never taught it)? That could be a reason you were invited… or it may have been human error – why jump to the conclusion that this one example means the scorers are dubious? I have been a scorer, have taught HS math, have a math degree, etc.
I often wonder why programs are so afraid to have an assessment objectively scored externally as opposed to internally?
Finally – I would disagree that this is mainly a writing exercise. I would imagine that in all of the courses that candidates have to take they have to do some type of writing (papers, journals, etc.)… oftentimes I wonder how applicable those assignments really are to teaching. The writing in edTPA allows the candidate to explain their reasoning in their selection of actions, plans, etc. It is something that is not seen enough in our profession. Furthermore, being aligned with other frameworks such as Danielson’s FFT, it helps prepare many of them for their teacher observations in the future.
“I often wonder why programs are so afraid to have an assessment objectively scored externally as opposed to internally?”
Because those standardized tests are not scored “objectively”. That is a fiction proposed as fact by those making the jack off the process. (maybe there’s one too many “thes” in that statement)
What is your job, John?
I work with teacher education candidates at an IHE in secondary math. Our program uses the edTPA (even though it is not mandated by the state). While our state does not mandate national scoring, we do send about 10% of our candidates’ work for national scoring. In addition, we had nearly 250 edTPA portfolios locally evaluated by mentor teachers, local NBCT teachers, faculty, doctoral students, etc. They received training on this process (not as detailed as national scoring but still rather detailed, including having to evaluate a sample from last year).
Some may argue that this is local scoring and that we are scoring our own… others may think that we are “outsourcing” our work to our mentor teachers. In fact, many who have evaluated our edTPA portfolios locally have claimed that doing so has helped them improve their own teaching. That is something that no one has brought up yet – just maybe, by having outside evaluators score edTPA at the national level, we could also be improving the teaching of the evaluators. Maybe the evaluator who scores an edTPA realizes that he/she really doesn’t ask lots of higher-order questions in his/her lessons and aims to do that. Maybe he/she realizes that in lesson planning too many assumptions are made about prior academic knowledge and that he/she needs to do a better job with this…
jlsteach,
What is IHE? I’m AI so help me.
Thanks,
Duane
Sorry, I work with a higher education teacher prep program (so at the collegiate level).
But what does the acronym stand for?
Institutions of Higher Education (IHE)
A brief response to your Wilson article (I have not had time to read it thoroughly), but looking at a couple of final points – would you say that teachers should not be assessed at all? Yes, there will always be some type of “power struggle” between those who create an assessment and those who are taking an assessment… even an assessment like the driving test that I had to take when I was 16 to get my license. Yes, there is a power play involved – including whether the person assessing me is a nice person or someone who is “out to get me” and never lets someone pass the first time. I know that many of you have stated that you can’t just say “trust us,” but at some point I chose to put some faith in my fellow humankind that… Would Wilson argue that everyone should be allowed to drive and that there shouldn’t be a test for that?
One could argue that this happens at any level. But part of that is how you view assessment. I choose to view the edTPA as an assessment that is not created for a power struggle but rather as an assessment that is created by educators for educators.
I will concede that in many places the implementation of edTPA has not been ideal (similar to that of the Common Core standards – I don’t want to get folks off track). And I also agree that just like Common Core, where some organizations have finally stated that the tests and their results should not be consequential…But don’t confuse the assessment with the implementation process.
I have never said that teachers shouldn’t be evaluated/assessed. I just ask that the process be one of open, collegial, and equitable standing between both parties and not an “I’m the boss, you’re the underling” environment, which is what any standardized test/evaluation rubric creates. And I’m not confusing the “assessment with the implementation process” as they are both problematic.
I ask you: How can a 30ish administrator who has never learned a second language, who has spent maybe 3-5 years as a teacher, who has no outside managerial experience, who maybe took a year or two of French or Spanish in high school, and who observes me for about 0.33% of the time that I teach be in a position (other than by authority) to judge what goes on in my Spanish classroom? Even worse, how could a rubric-based assessment built on such a minuscule amount of observed time be anywhere near accurate?
Again, the edTPA suffers all the errors and invalidities that Wilson identifies. Read his work (you’ll be one of the few of the hundreds that I’ve personally challenged, and one of the few I have challenged on blogs like this). If you need help getting through it, feel free to contact me personally at dswacker@centurytel.net and I will go through it page by page if necessary.
I will take on your task when I get the chance… however, here is one way that the edTPA is different from the scenario you describe… instead of an administrator who knows nothing about the subject area (I had a similar situation – a principal, hired through an alt-certification program, who had taught SS in the past, evaluated me in math at a STEM-focused school!), the edTPA has evaluators that are subject-specific…
You have stated nothing that would assuage my concern that none, yes none, of the edTPA process is “objective”. See my post on Wilson and then go and read his work to understand why this is so.
Does anyone know if there are samples available to the public of entire teacher portfolios, with the corresponding evaluations? How is anyone supposed to evaluate what edTPA is doing without seeing samples of what it is doing? Without such samples, how can any judgment be made? “Trust us” is not sufficient.
Bob – I know that there are samples that faculty can refer to (available for training of local evaluators)…but I wonder – are samples provided of Praxis? of other assessments? If one wanted to, you can go to the AACTE site and get access to a sample handbook that demonstrates sample rubrics, etc…
Thanks, John. I am intrigued. Suspicious, but intrigued. I have believed that the bar should be high for entry into teaching, but I have a visceral distaste for regimentation and standardization. To imagine that there is any one way in which to be a great teacher seems to me as crazy as thinking that there is some one way to be a great painter or philosopher. I recently reviewed the sample questions for the teacher certification exams here in Florida, prepared and administered by Pearson. It was easy enough to see what the correct answer was supposed to be to the various questions, but the whole of the exam seemed practically a love letter to standards-and-summative-testing-based “accountability,” which is not surprising given the source. And it is precisely this sort of ideology creep that I worry about in initiatives of this kind. I am not terribly crazy about establishing a ministry of truth in an area as fundamental as this. Ours is a profession that deals in cultural transmission–transmission of ideas. At what point do we cross the line and become China during the Cultural Revolution, a country in which there is an official ideology for teachers to which all must subscribe, unquestioningly? Does this seem a crazy concern to you? I don’t think that it is. I think of myself as a level-headed and practical person but as one who knows some history. “There’s no bullet list like Stalin’s bullet list,” wrote Edward Tufte. Like him, I look upon most people’s bullet lists with a cold eye.
Not readily, because there are candidate and parent consents involved. Generally we are able to ask for restricted permission to share, say for internal professional development purposes. I hope you can understand that with images of students and their work samples we can’t just blast candidate portfolios for public consumption.
I shared the feeling, though, that without seeing what it all looks like, one might be skeptical. I certainly had my doubts and wanted to see what it all looked like. Is there any hope for this community to take a thoughtful colleague’s word for it? Doubter I was, but now having seen the work, doubter no more. I’m 20 years in serious teacher preparation efforts at a comprehensive program. This is one of the most promising advances I’ve seen. Clarity and precision with the expectations of effective instructional practices based on knowledge of students.
Well, Amee, I am intrigued. And you have no fear of this system becoming corrupted over time? You think that it is entirely transparent and that it respects AND ALWAYS WILL RESPECT, DESPITE ITS CENTRALIZATION, the diversity of teaching styles and gifts?
Again, “Trust us” is hardly sufficient.
And I now have a first-hand distrust of corporatization/centralization because I had been crafting a thoughtful response on my I-friggin-pad and it evaporated. Will try again later with my commentary about the idea that it’s centralized which = bad.
The development and scoring is executed by those of us directly connected with candidate support and teacher evaluation focusing on the teaching practice I think we all want to see more of in classrooms. Are we building towards a national vision for effective instructional practice? Is that a bad thing? We already have national visions for architects’, nurses’, and accountants’ beginning practice. We seem to appreciate, or not even notice, common standards for our buildings, health care, and finances. What is it about teaching that we want to hold to a different approach to standards of (beginning!) practice?
I am going to sound really ignorant here, but who creates the standards for nurses, architects, and lawyers? I’m betting that somewhere in there is a national organization which derives some authority from its members, who compose the various committees and agree to abide by some set of standards derived from within that organization. For the life of me, I cannot ever remember being approached by a national teaching organization. I’ll be damned if I am going to roll over and have some self-appointed expert tell me what in their infinite wisdom they have decided are the “best practices” of teaching without the consent of the teaching profession.
Blur faces for Pete’s sake! It shouldn’t be so difficult to present exemplars of what edTPA is asking.
And no, I won’t take your word for it. I expect documentation for something that has such a significant impact on the lives of so many people.
Please do not forget that this is the result of the work of Linda Darling-Hammond, the supposed “supporter” of teachers and teaching as a profession. Don’t be fooled anymore! She is one of the prime authors of this so-called “professionalization” movement that has willingly gotten into bed with the reformers as they siphon in their blood money.
When these professors subject themselves to an equally humiliating and unscientific process and they start losing their jobs, their homes, and are unable to support their families then I will listen to what they say.
I voluntarily underwent such a process when I became a National Board of Professional Teaching Standards certified teacher. It was a valuable experience to me. But now that they are going to tie the snake oil VAM to it, it becomes a worthless, punitive exercise in sadism.
I don’t think they mean harm, Chris, even if they have instilled it. I have been thinking about this a lot in terms of NC leadership. I think at some point, with teaching, there is simply no way to further professionalize… it just is what it is. But they feel that it needs to be elevated, with lots of hoops to jump through and lots of stuff and so forth. It’s a generational thing, I think. They are coming out of an era of this type of thinking (measure, publish, etc.), and it is up to those of us from younger generations to turn it around.
Joanna, that argument, that the ends justify the means, and those good intentions that pave the road to hell have led to much evil in this world. You seem to have the nurturing, forgiving, and encouraging heart of an optimistic teacher. While that serves you well in the classroom and in life in general, it will make you a casualty in the war on teachers and public education.
I don’t know if Darling-Hammond meant harm. I would hope she didn’t. But the fact is that her designs and participation have hurt a hell of a lot of real, live people and negatively affected their children, their spouses, and their economic prospects, often for life.
She, meanwhile, keeps making more and more money while she hobnobs with those who have publicly stated, over and over, that they do mean us harm, lending them legitimacy and cachet. Is that a moral thing to do? Is that ethical?
Are we comfortable with letting Darling-Hammond and Weingarten and Van Roekel attend lush cocktail parties and extravaganzas while raking in lots of money and offering cover to our open enemies?
At some point in war you have to identify your enemy and plan for your defense or you will simply become one of many, many casualties. I don’t plan on being a martyr for niceness and getting along with everyone. Sorry!
I’m not justifying what they are doing, or condoning it; I just think they are from a different era and mindset that is susceptible to that type of thinking. I try to remember that a person with no vice is a person with no virtue.
Anger is not becoming to me. And I am using my kind attitude to make progress in North Carolina on behalf of the children. So long as you are using your anger to do the same in Florida, then I don’t see a huge difference.
I don’t think anyone should ever set out to get rich in the name of education. If they are, shame on them. But we have to move forward to save the schools. A kind disposition that looks beyond the anger I certainly feel is the only way for me to maintain a clear head. And I do not intend to be a casualty, either, Chris.
Go ahead and be nice, Joanna. I can’t help myself either. Just don’t assume that everyone else is. That was my mistake.
Chris, you are exactly right. Thank you for so eloquently stating what so many of us think.
Thank you for this account to balance the dialogue. Our experiences with full implementation, including official scoring of 667 portfolios, match your report: deeper professional dialogue, clearer expectations of effective beginning teaching, and an increased hiring rate as candidates are better able to articulate their use of assessment results to inform their teaching.
Some call it a “high stakes assessment.” I like to add “for a high stakes profession.”
It is, indeed, a “high-stakes profession.” This is why I am very suspicious of attempts to centralize and regiment it and think that we should all share that suspicion.
The claims you make about the benefits to your students of the edTPA may say more about your pre-edTPA program than about the benefits of the edTPA. My experience is that after the intro of the edTPA (PACT in those days), the deeper conversations gave way to desperate conversations about how to implement the edTPA. And the students got clearer expectations – about how and what they were scripted to teach, not about what knowledge is of most worth. Do you have research connecting increased hiring rates to implementation of the edTPA?
Amee (and John S.),
Your little EdTPA suffers all the same errors in epistemological and ontological underpinnings, the same errors in construction and the same errors in interpretation that render the whole process INVALID, ILLOGICAL AND UNETHICAL as shown by Noel Wilson in “Educational Standards and the Problem of Error” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine. (updated 6/24/13 per Wilson email)
1. A quality cannot be quantified. Quantity is a sub-category of quality. It is illogical to judge/assess a whole category by only a part (sub-category) of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as one dimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing we are lacking much information about said interactions.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think college professor who “knows” the students capabilities and grades them accordingly), the General Frame-think standardized testing that claims to have a “scientific” basis, the Specific Frame-think of learning by objective like computer based learning, getting a correct answer before moving on to the next screen, and the Responsive Frame-think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback. Each category has its own sources of error and more error in the process is caused when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid (errorless, or at least with minimal error) [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations and the test and results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error”, any one of which renders the test making/giving/disseminating of results invalid. A basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory”. In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it measures “’something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic)”. Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self evident consequences. And so the circle is complete.”
In other words students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of the students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an “A” student” is almost as harmful as “I’m an ‘F’ student” in hindering students becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
” . . .candidates are better able to articulate their use of assessment results to inform their teaching.”
And that sets off the alarm bells. You speak above of standards for buildings and highways, which shows me that you are of the mindset that children are data points to be measured and manipulated. NO!
You also say you have done this work for 20 years? How can you possibly know what good teaching is if you have not been in a classroom for 2 decades? You never worked under the hideous NCLB, Reading First, or RTtT and VAM.
Your responses are sprinkled with enough of the reformy jargon that you come across to me as someone who has bought into the whole bogus data/testing/continuous improvement nonsense.
I realize you must prepare your students to teach in the schools we have under the conditions that exist but for heaven’s sake don’t let them go without knowledge of what teaching was like before the neoliberal corporate technocrat takeover removed all joy, laugher, fun, creativity, and humanity from public education.
And don’t claim that turning all new teachers into data crunching automatons is a good thing to me. It is not and never will be.
I’m sure you are dedicated, sincere, and a nice person but this test is creepy and a corporate tool with no system in place for public review, evaluation, and modification. That alone is Orwellian enough for a thumbs down from me.
Chris…I can’t speak to Amee’s experience in schools, but let me share mine with you. I graduated from a secondary education master’s program in 1999. Our “portfolio” (yes, that was the in-thing) was a huge notebook filled with lesson plans, teaching reflections, education philosophies, etc. In essence, it was a huge scrapbook…I easily passed my portfolio and was certified to be a teacher (on another note, I was part of the first group in my state to take content area tests for certification)…
I taught for a few years in private schools and for five years in Washington DC Public Schools, right around the time that Michelle Rhee came in; I left as IMPACT, their version of teacher evaluation measures, came on board. So I have lived in the classroom with many of the things you are describing.
I worked with colleagues who were of the mindset (via TFA) where data was king, where data walls measured students based on their numbers, etc. And even as a math teacher (who seemingly would love numbers) I loathed such things…
To me, edTPA is different from many of those:
– it differs from teacher assessments like IMPACT in that it allows the candidate to set a context for their teaching
– it really is educative….at the local level, we use the results to help our candidates as they prepare for induction, etc.
There are many flaws about things like NCLB (which, let us not forget, was signed into law by President Bush but was advocated for by Ted Kennedy – not exactly a conservative person!)….The reality is that when our framers set up our constitution they left education to the states….which unfortunately has led to large amounts of inequality across this nation – inequality in terms of what a person has to do in order to teach, what a student needs to do in order to graduate, etc..
edTPA works to help ease this inequality…those of you who are not in favor of some type of common standard – does this mean that you are ok with the inequality that still exists across the nation?
jlsteach, thank you for the thoughtful reply. As an NBCT I am familiar with the learning that can take place from producing a reflective teaching portfolio.
My concerns are how will the edTPA be controlled? Who has final say about what qualifies as “good teaching”? How does it protect the states rights for determining their educational needs and outcomes?
The “if you don’t support edTPA you must oppose equality” is just a low blow that is unworthy of your response here.
I can seek equality other ways. And as a 1st grade teacher I know and teach my students that “same” is not always “equal”. I’m surprised that the cheerleaders of edTPA don’t acknowledge that more because it’s one of the big sticking points to me.
The children I taught in the South Bronx did not need the same things as the children I taught in the Upper West Side of Manhattan. Not at all. The students I taught this year did not need the same things that the students I taught last year. Not at all.
Last year I had an intentional class of students who failed Kindergarten but were “placed” in 1st grade. I was able to bring 15 of the 16 kids to grade level because I had the freedom to modify how and what I taught them. This year I had the “behaviors” class where I took all the Kindergarten students who had multiple referrals and suspensions. I was able to bring 18 of the 21 to grade level by the end of the year using entirely different teaching methods and materials.
Since I am not allowed to see the edTPA or even to examine the rubrics, I don’t know what it claims as “good teaching.” One of my concerns is that once students are taught that “this one way” is good teaching, especially if it is tied closely to the CCSS and current neoliberal/business fads in education such as the worship of “data” to inform instruction (a quote from above), then how will these teachers learn to do what I do in modifying my instruction every year? How will they gain the experience or even think about trying something new and different if they are formed with one, true teaching model? If that is not what edTPA does, then why do its defenders have such a hard time explaining that and convincing the skeptics?
Some of the problems with CCSS are the secrecy, the lack of transparency, the inability to modify or change anything, and the lack of general public input in their formation and adoption. If edTPA follows that same course, as Pearson seems to demand, then it will have similar problems, don’t you think? That’s the problem with profiting off of something you claim is a general, public, national good. It must be kept hidden and secret for proprietary reasons, and hence it ceases to be general, public, and national.
Too much “Just trust us” for those of us who were deceived by the lies of NCLB, RTtT, VAM, Marzano, Danielson, high-stakes testing, etc.
All were launched with good intentions and all ended up being hateful, controlling, punitive trash.
Thank you for sharing an alternative view of the edTPA. And they are right – the process itself is important, there is no denying that. However, as an assessment, an assessment that leads to a license, it leaves much to be desired. First, it is not educative – a candidate failing the test does not receive useful feedback; rather, s/he receives a score on a rubric that is unclear and subjective. Second, the scorers are not really all that well trained – I was a scorer, and the alleged 20 hours of training was not real – it was more like 10-12 hours. This lack of training leads to scores that may not be even slightly representative of the candidate’s work. Third, it is a writing test. A twenty-minute video does not supply even the slightest bit of insight into the candidate’s future performance as a teacher. Thus the candidate is forced to guess how the scorer will interpret what was written. The most damning aspect is that there is not a shred of predictive validity associated with this device. The device has not been in use long enough for any studies of predictive validity to have been done. So we are now making predictions using a device that may not be predictive. Finally, the edTPA does remove the judgment of faculty and supervisors – the score on the device is all that matters, not the candidate’s academic prowess, not the candidate’s performance across a student teaching experience – all that matters is that a scorer somewhere else looks at the candidate’s response and makes a decision. How anyone can say that this does not remove faculty and supervisor expertise is beyond me.
Thanks for raising the issues. No evidence of the validity of edTPA is available for teaching in all of the grade spans and subjects it is designed to assess. If you find the names and the institutional affiliations of the writers of the protocols for edTPA, listed by grade level and subject area, let me know. The anonymity of the edTPA authorship is totally out of step with the idea that the protocol is a product of professional judgment and consensus. The authors of the grade and subject specifications should be known.
This is a really interesting discussion and one that is well worth having. I am inclined to reserve judgement on edTPA as a whole, in large part because the conversation here indicates to me that the implementation and timbre of the assessment varies a lot from place to place.
My experience with the edTPA is from the perspective of a teacher-in-training. I worked on my MAT specializing in secondary English at Vanderbilt in 2010 and 2011, and Vanderbilt was piloting a version of the assessment (just called the TPA, if I remember correctly). My classmates and I, as part of our student teaching, completed all of the steps discussed above: designed a series of lessons specifically for the TPA, videotaped parts of those lessons, prepared copies of student work with our feedback, and wrote lengthy explications of our pedagogical aims and outcomes. At the end of the process, I left with a mixed view of the assessment.
First, I want to say that the faculty at Vanderbilt prepared us exceptionally well for the assessment (some of my classmates would likely disagree with me, but I felt well prepared). The assessment was broken down and due dates staggered so that we could digest the tasks one at a time; our professors talked to us about what kind of lessons the TPA valued and what kind of language we needed to use in the written portion; and we were encouraged to prepare our materials strategically in advance to maximize our options and chances for success. All of these steps helped me feel comfortable with the assessment (despite the tremendous amount of work involved) and led to my “success” (I put “success” in scare quotes because of how the TPA results were reported to us, which I discuss below).
Thus, my professors did a great job preparing me for the assessment and used the TPA as an opportunity to talk about good teaching practices. That said, looking over the list of how they prepared us above reveals some of the potential problems with TPA noted by participants on this discussion board. First, we had to understand as we approached the TPA that the assessment valued certain kinds of teaching strategies and devalued others. For example, lessons where students worked independently and in which the teacher simply acted as facilitator fulfilled the requirements of the TPA much better than lessons in which the teacher played a more active and central role. While I believe that such student-centered lessons are effective and an important part of good teaching, they are not the only kind of good teaching. Similarly, the TPA demanded that students generate visible products as part of the lesson, so lessons in which students generated no physical artifacts were discouraged. I remember one of my classmates wanted to use Socratic seminar, a method that I can attest from my own teaching experience is a valuable and effective tool for getting students to explore a topic in depth and to see ideas from multiple perspectives, but he was told that the TPA would look unfavorably on such a lesson (to his credit, he tried it anyway and, I believe, passed the TPA; to the faculty’s credit, they clarified that just because the TPA does not reward a particular teaching strategy does not mean that such a strategy is worthless or “ineffective”).
Another noteworthy part of my preparation for the TPA lies in the strategic nature of preparing the materials. In other words, like any standardized test, some portion of one’s success on the TPA depends upon understanding the structure and values of the test itself rather than on the content being tested. We were encouraged to prepare extra video, for example, so that we would have plenty of footage to choose from, thus maximizing our chances of getting a good segment for the assessment. We also were told that even if our lesson did not go very well, we could still score reasonably well on the TPA if our written reflection acknowledged the lesson’s shortcomings and reflected on how to improve them. In short, doing well on the TPA required learning the ins and outs of the test, just as a student must learn the format of the SAT to maximize his or her score. One might see this aspect of the TPA as negative, though it may be more fair to view it as a neutral element common to any assessment (though in my view the problem of learning the test is mitigated on tests scored internally by people who know the students, as opposed to standardized external scoring, because the scorers have the opportunity to explain directly to the test-takers how they see the key elements of the test). Regardless, this sort of test-preparation is an element of the TPA that should not be ignored.
The two points I just explored–the way in which the TPA values certain teaching strategies over others, and the importance of preparing for the structure of the test–underscore the potential for narrowing of curriculum through the often-bemoaned practice of teaching to the test. I should note, however, that good teachers can use the specific requirements of the test as a springboard to explore different perspectives on the content, as my professors did (though I think good teachers will try to explore different perspectives regardless, so standardized testing cannot be defended on these grounds–just because teachers can do good work in spite of limiting assessments does not mean that such problematic assessments are acceptable).
On the whole, I believe the TPA did have a positive effect on my teaching practice, insofar as it forced me to reflect on my teaching practice. It presented a formidable challenge, and my resolve to do a good job within the confines of the assessment forced me to think hard about what constitutes good teaching and why. Thus, by accepting the TPA as a kind of challenge to do excellent work, my work improved. However, even though I think the detailed reflection that the TPA required of me improved my teaching, that same reflection could easily have been accomplished through other means, and the biggest leaps I have taken to improve my teaching have come since I began working full-time in the classroom from trial and error, advice from colleagues, and my own personal reflections. Although the TPA may have encouraged me to be a reflective practitioner, arguably that same goal could have been accomplished through a less time-consuming, high-stakes means.
I should note also that I have always been good at taking tests, and have often been able to score well even without deeply learning the content; in this case, I like writing (could you guess?–sorry for the long comment) and have substantial training in written exposition, which probably improved my performance on the TPA. Thus, writing about my practice helps me, but that may not be true for everyone–not everyone is an English teacher. Additionally, we did not receive any detailed feedback once our TPA’s were scored. In aggregate, we were told that we had all passed and what some of our most common problems had been (in other words, what categories we scored well and poorly in on average). But since I did not receive specific information about where I did well or poorly, final reflection on my work was impossible.
I do not remember whether the faculty at Vanderbilt scored our TPA work themselves, whether they sent it off to be scored externally, or some combination of the two. But for some reason, they were forbidden to give us our specific results, or perhaps chose not to do so to prevent comparisons between us. I wish I could recall these details, but I cannot. I do remember, however, getting the impression that since our experience with the TPA was a trial run, our scores were inflated to some degree. In other words, since we were testing the assessment tool, our scores may have been curved. The important thing, though, is that regardless of curves, since our scores were not specifically reported, I cannot measure with any depth my own “success” on the TPA–I only know that I passed. This limitation highlights a perennial problem with standardized measures of subjective human enterprises–the mere existence of a standardized assessment breeds the illusion that good practice can be measured objectively, which may well not be the case; thus, I can see the merits of withholding detailed TPA score reports from student teachers to avoid the artificial notion that a higher score means that someone is a “superior” teacher. The test, then, can only be summative rather than formative–it is pass/fail.
In conclusion, I do not know whether the TPA as a whole is a good way to assess a student teacher’s potential or ability. I do recognize the value of detailed reflection, which the TPA encourages, as well as the value of challenging goals for student teachers. In an ideal world, I imagine the TPA as a flexible tool, one that could be adjusted by the faculties using it, one that would be scored internally by those working directly with the student teachers being assessed. If implemented this way, the TPA could provide an impetus for reflection and a substantial challenge to seek excellence, along with common guidelines for teacher preparation programs, while at the same time avoiding the problems of rigid standardization.
Thank you, Michael E., for this very informative “insider” viewpoint. You have confirmed some of my fears and alleviated some others. The experience is very similar to the National Board Certification process I went through several years ago.
Not giving detailed feedback is very problematic to me. I would be interested if that was a feature of the proto-test you took or if that is still part of the process. I know the NBPTS guarded everything so closely during my process that it sometimes hindered completion of components because we felt we were flying blind, in the dark, without a compass. That guarded secrecy is still problematic to me.
I’m proud that you are a colleague!
“our professors talked to us about what kind of lessons the TPA valued”
I see this as the central problem with the process. If you’re going to teach for a long time, you need to teach the kind of lessons that are valuable to your students and to you as their teacher, not to some psychometricians.
I haven’t been able to respond for a couple of days, so I am getting caught up…I would agree with the poster who mentioned the lack of qualitative feedback with national scoring…this is an issue that I agree needs to be resolved (potentially moving towards the NBCT model of having specific comments that could be taken from a pull-down menu?)
That is also one reason why teacher preparation programs are encouraged to do local evaluation in addition to national scoring – with local evaluation one can provide the qualitative feedback to candidates (this can be done after submission to national scoring but before getting the scores back)
Chris – I wanted to clarify…I never said that those who are against edTPA are for inequality…I just raised the question…since many who are against edTPA have claimed that they don’t like standardization…I am just asking – some could see standardization as lowering standards, but what if it was viewed as raising standards for all? (helping ease inequality)
Michael,
I really appreciate your thoughtful analysis. You wrote:
“Although the TPA may have encouraged me to be a reflective practitioner, arguably that same goal could have been accomplished through a less time-consuming, high-stakes means.” Right!
Now or soon credential candidates will have no conception of life before the edTPA. They will have no idea of how their credential program has been narrowed and how the focus on edTPA prep has promoted a conception of teaching that is measurable by the rubrics rather than one that is founded upon the notions of social justice and empowerment. Their teachers will forget too, just as assembly line workers now have no conception of how automobiles were built prior to Ford. (See Braverman, Labor and Monopoly Capital.)
Exactly!
If our now or soon credential candidates can narrow the focus and accomplish the following, I am completely confident our P-12 students will benefit from this ‘narrow-mindedness’. After all, this is why we do what we do- preparing teachers to effectively teach and positively impact all students in any district:
Narrowing the Focus to:
1. Plans lessons that build on one another to support learning.
2. Uses knowledge of students to target support for learning (includes students with IEPs/504 plans along with individuals and/or small groups of students with similar needs).
3. Uses knowledge of students to justify instructional plans (prior learning in addition to students’ interests)- justifications are connected to research and/or theory.
4. Identifies and supports language demands associated with a key (content) learning task.
5. Develops informal and formal assessments that monitor students’ use of the essential strategy and requisite skills.
6. Demonstrates a positive learning environment that supports students’ engagement in learning.
7. Actively engages students in integrating strategies and skills to deepen understanding.
8. Elicits students’ responses to promote thinking and develop skills along with the essential learning strategy.
9. Supports students to apply the essential learning strategy.
10. Uses evidence to evaluate and change teaching practices to meet students’ varied learning needs- evaluation and changes are connected to research and/or theory.
11. Analyzes evidence of student learning.
12. Provides explicit, relevant feedback to students.
13. Provides opportunities for students to use the feedback in order to guide and further develop their own learning.
14. Analyzes students’ use of academic language to develop content understanding.
15. Uses the analysis of what students know and are able to do to plan the next steps in instruction- analysis and next steps are connected to research and/or theory.
Alan Singer’s Original Huffington Post
The “Big Lie” Behind the High-Stakes Testing of Student Teachers
http://www.huffingtonpost.com/alan-singer/the-big-lie-behind-the-hi_b_5323155.html
On April 30, 2014 the Higher Education Committee and the Education Committee of the New York State Assembly held public hearings on a new teacher certification evaluation system administered by Pearson known as edTPA. It was scheduled to take effect on May 1, 2014.
The Assembly hearing became moot before it was even held, at least temporarily, when the New York State Regents, the governing body for education, decided to postpone implementation until July 2015, although this year’s student teachers had already completed, submitted, and been graded on the project, which included videos and commentary. Nobody discussed whether the student teachers would be refunded their $300 test fee.
State Education Commissioner John King said the delay was in response to requests from college faculty and others for a “safety net” covering teacher candidates. More likely it was a decision by Governor Andrew Cuomo to make peace with the state teachers union as he prepares for his reelection campaign. Education professors at the state university system who are members of the NYSUT, New York State United Teachers, had lobbied hard for the postponement.
Ironically, the New York State decision came at the same time that Arne Duncan and the federal Department of Education announced it would press to make similar teacher certification evaluations a national requirement. My recommendations for preparing edTPA portfolios may quickly become relevant nationally.
Although it is being used to evaluate student teachers for certification, the TPA in edTPA stands for Teacher Performance Assessment. Student teachers in my seminar suggested a better title would be “Torturous Preposterous Abomination,” although “Toxic Pearson Affliction” was a close runner-up in the voting.
All of my students passed the edTPA evaluation, including some who I felt were weak. In one case, two student teachers who handed in very similar packages received significantly different scores, which calls into question the reliability of the evaluations.
Statewide, the passing rate was 83%. One graduate student summed up the way the class felt about the procedure. “The whole process took time away from preparing in advance for future lessons . . . It really just added unneeded stress.”
Based on my experience this year with edTPA, I made the following recommendations to my university for preparing student teachers for the test in the future.
1. We prepare students as if New York State will eventually enforce edTPA while at the same time we organize to get the requirement dropped.
2. We keep edTPA submission as a university requirement for student teachers for 2014-2015 although those who fail will be permitted to take an optional exam.
3. edTPA and student teaching are not the same thing. Student teaching is about learning to be an effective and creative teacher. edTPA is about following directions. Some weaker student teachers performed very well on edTPA because they followed the format.
4. There were five keys for successful edTPA submissions.
a. Whatever lesson format they use on a regular basis, for the edTPA submission student teachers need a very structured format that includes all edTPA requirements. They do not need to do this for every lesson – that would be torture – only for edTPA.
b. Students need to video an appropriate lesson. Students who failed did so because of poor lesson choices. Sometimes they were too restricted by the cooperating teacher. If a teacher or school will not accommodate them, move them right away.
c. For videoing, set up the room as a television studio. Have the board, smart board, map, etc. right behind the student teacher. Have the students you want on camera right in front of the camera working as a team. Keep the lights on. You do not need field microphones. They make the whole thing too complex.
d. Student teachers must have evidence of informal and formal assessments. As they walk around the room they should interact with students and take notes on a note pad or electronic device. This needs to be seen on camera. Every lesson should end with a final writing assignment that will be submitted as evidence of three levels of performance.
e. After teaching and videoing the lessons, students should rewrite lesson plans to reflect what actually took place. Some colleagues feel that reviewers value writing over teaching, since the reviewers do not actually see much teaching. This would explain why some weaker student teachers who are good writers scored higher than expected. I recommend that lesson plans and commentaries include a lot of action words – students will analyze, evaluate, explore, examine, determine, assess – and that student teacher teams review and edit each other’s work before they submit portfolios to Pearson.
5. The new Pearson-created ALST and EAS tests were harder than previous Pearson certification tests. Schedule a review as part of the student teaching retreat before the start of each semester. The key to success on the Pearson ALST is to remember they are not asking for the right answer or the best answer on the multiple-choice part of the test, but for the answer they embedded in the reading passage. Don’t argue with the test. Find their answer in the passage.
6. The Pearson EAS asks students to develop strategies for teaching diverse student populations, English Language Learners, and students with special needs. They get to pick the grade, subject, and topic. These essays are worth 30% of the score. Students reported being unable to finish this task. They need to have strategies ready before the test.
Below is the testimony I e-submitted to the New York State Assembly Committee. I demanded that edTPA be dropped rather than postponed. I figure I will probably have to submit my testimony again next year when the postponement runs out.
Did Mike Trout learn to play baseball by writing a fifty to eighty page report explaining how he planned to play baseball, discussing the theories behind the playing of baseball, assessing a video of his playing of baseball, and explaining his plans to improve his playing of baseball?
Did Pablo Picasso learn to paint by writing a fifty to eighty page report explaining how he planned to paint, discussing the theories behind painting, assessing a video of his painting a picture, and explaining his plans to improve his painting?
Did you learn to drive a car by writing a fifty to eighty page report explaining how you planned to drive a car, discussing the theories behind driving a car, assessing a video of your driving a car, and explaining your plans to improve your driving?
Of course the answer in all three cases is a resounding “NO!” You learn to play baseball, paint a picture, or drive a car by playing baseball, painting pictures, and driving cars, not by writing about it.
Yet Stanford University, Pearson, and New York State are trying to sell the public on the idea that you learn to teach, not by teaching, but by writing about it. They also want you to believe that they have perfected a magical algorithm that allows them to quickly, easily, and cheaply assess the writing package and accompanying video and instantly determine who is qualified to teach our children. Maybe they plan to sell the algorithm to Major League Baseball next.
New York State is currently one of only two states that proposes to use edTPA to determine teacher certification. Not only should New York State postpone the implementation of edTPA, but it should withdraw from the Pearson, SCALE, Stanford project. edTPA distracts student teachers from the learning they must do on how to connect ideas to young people and undermines their preparation as teachers. Instead of learning to teach, they spend the first seven weeks of student teaching preparing their edTPA portfolios and learning to pass the test. Based on preliminary results on the first round of edTPA, most of our student teachers are pretty good at passing tests, so edTPA actually measured nothing.
Sometimes in the middle of intellectual dishonesty and double-speak, the spin doctors make a mistake and accidentally tell the truth. SCALE, the Stanford Center for Assessment, Learning, & Equity, recently spilled the beans about its partnership with Pearson and the American Association of Colleges for Teacher Education (AACTE) to create edTPA. In a question and answer session at a breakfast meeting held at the annual conference of AACTE on March 2, 2014, Sharon Robinson, President of AACTE, and Ray Pecheone, executive director of SCALE, discussed edTPA. The bottom line is that the companies and organizations pushing for these “innovations” and profiting from them are using students and teachers as their laboratory guinea pigs. They have no proof that these things really work and will improve education in the United States.
SCALE claims that “edTPA is a preservice assessment process designed by educators to answer the essential question: “Is a new teacher ready for the job?” However in a question and answer session at an AACTE breakfast meeting, when asked, “Why hasn’t SCALE completed any predictive validity research for edTPA? Has this been linked to variation in classroom performance in any way?” they had a very strange response.
“Predictive validity studies for licensure assessments are routinely conducted after a test or assessment has been in operational use. In fact, examining the validity processes used for other forms of performance assessment of teaching, there is not one instance where predictive validity was established prior to the adoption and operational use of the assessment . . . conducting predictive validity studies during a field trial introduces many sources of error that could compromise the results, including the main concern that candidates are not the teacher of record during clinical practice that certainly would confound the results of the study.”
In other words, despite their claims, there is no evidence that edTPA accurately measures anything.
Robinson and Pecheone went on to say that “The implementation of predictive validity requires following candidates into their teaching practice for several years in order to obtain a stable estimate of student learning based on the research findings of value-added studies conducted for teacher evaluation.”
Again, in other words, they expect New York State to buy in, but they will have no idea about the reliability of edTPA for years until they can see whether teachers who scored high on the test actually become good teachers.
Meanwhile, student teachers are not being evaluated by trained field supervisors or cooperating teachers, but by temporary evaluators of questionable qualifications (they work online, so they can be anywhere in the United States and have no familiarity with New York State) who are hired by Pearson, which is most noted for its famous “Pineapple and the Hare” passage on an eighth grade state reading assessment. As of March 17, 2014, Pearson was still trying to hire people to evaluate the portfolios. Pearson was requesting that “scorers possess both strong pedagogical content-specific knowledge and experience in roles that support teaching and learning in the edTPA content area in which they are scoring,” but had no procedure in place to evaluate the evaluators. The portfolios contain twenty minutes of video and between fifty and eighty pages of lesson planning and commentary, but evaluators were expected to complete their task in two hours and were being paid $75 per portfolio, or $37.50 an hour if they work fast.
After reading the pineapple passage, students were asked to decide which animal spoke the wisest words, the hare, the moose, the crow, or the owl. Now it is your chance to speak. As the Grail Knight said to Indiana Jones in the movie “Indiana Jones and the Last Crusade,” “Choose wisely.”
UUP supports open and transparent edTPA review
Jamie Dangler, Vice President for Academics, United University Professions
I appreciate the opportunity to contribute to this important dialogue about the edTPA, particularly as it relates to what’s happening in New York State. United University Professions, which represents academics and professionals involved in teacher preparation programs at 17 SUNY campuses, strongly opposed New York’s rushed edTPA implementation. While most states are at various stages of developing their use of the edTPA or other performance assessments for teacher candidates, New York moved quickly to make the edTPA a high stakes certification requirement.
UUP members from across the state reached out to their union leadership asking for help as they feared irreparable damage to their programs and to a cohort of students caught in a poorly planned rollout of the edTPA. UUP’s officers spent considerable time through the 2013-14 academic year meeting with faculty and staff at our campuses and working with our UUP Teacher Education Task Force, which includes representatives from all SUNY campuses with teacher preparation programs.
We also worked jointly with the Professional Staff Congress (from CUNY) and our affiliate New York State United Teachers to get input from faculty, staff, and students to document both positive and negative experiences with the edTPA. Our efforts included many meetings at campuses across the state, collection of testimony through a portal on the NYSUT website, and a survey of our members involved in teacher preparation. Our information-gathering revealed overwhelming concern about edTPA implementation problems at all of our campuses. Other concerns voiced by our members include scoring and validity issues, unfunded costs to programs for edTPA implementation, parental consent and privacy protection for videotaping in K-12 classrooms, displacement of sound performance assessment practices already in place, and limited public accountability given that edTPA materials are “proprietary information” and cannot be shared by faculty who must sign non-disclosure agreements.
Despite accusations hurled at UUP and other unions, expression of these concerns does not mean we are opposed to developing new and better performance assessments. We join educators around the state in calling for the best for our students, our programs, and our state’s educational system. But achieving the best possible educational practices will not happen without more serious attention to the advice and expertise of on-the-ground educators who understand the complex challenges we face as we do our best to educate a diverse population of students in vastly different settings across our state.
In conjunction with PSC and NYSUT, UUP worked with students, parents, and colleagues from private colleges to share information and document edTPA problems. We pressed legislators and the Board of Regents to look more closely at SED’s implementation of the edTPA and the potentially destructive impact on many students and teacher preparation programs across the state. The New York State Assembly Higher Education and Education Committees sponsored a legislative hearing on April 30, which brought educators and students from around the state together to give testimony.
On April 29, the Board of Regents instituted a safety net that delays full implementation of the edTPA. Teacher candidates who do not pass the edTPA will be allowed to use a passing score on the Assessment of Teaching Skills-Written (ATS-W) for this component of their initial certification through June 30, 2015. In conjunction with the safety net, the State Education Department has been charged to form a task force to review the edTPA situation in New York State. Its members will include representatives from UUP, PSC, P-12, the Commission on Independent Colleges and Universities, and the Teacher Education Advisory Group. The Stanford Center for Assessment, Learning and Equity (SCALE) will be included in Task Force discussions.
As UUP President Fred Kowal noted in a May 14, 2014 letter to the Daily Gazette: “Delaying the edTPA certification mandate and creating a task force to refine it will allow education professionals and SED jointly to develop a program that will properly prepare students to teach. The faculty involved in teacher education programs must have a voice in determining appropriate assessment and certification tools and developing realistic timetables for implementing them. We are not against developing better performance assessments, but those changes must be made correctly and cooperatively. Now there is time to get it done right.”
Please visit UUP’s Teacher Education Task Force’s web page for more information about our efforts:
http://uupinfo.org/committees/teached/taskforce.php
Excellent rebuttal from Alan Singer:
http://www.huffingtonpost.com/alan-singer/scale-and-edtpa-fire-back_b_5506351.html