We’re All Mad Here: The Conference on English Education’s (CEE) Response to the US Department of Education’s Proposed Regulations for Teacher Preparation
On Dec. 3, 2014, the United States Department of Education (DOE) released a document proposing new regulations for teacher preparation programs, citing the need for greater program accountability, as well as the development and distribution of data focused on the quality of those programs. The public was then invited to comment on the regulations, with the comment period closing on Feb. 2, 2015. Note, however, that the Office of Management & Budget “is required to make a decision regarding the collection of information contained in the proposed regulations between 30 and 60 days after publication of the proposed regulations.” For full consideration of the public’s response, therefore, comments should be submitted by Jan. 2, 2015.
The Conference on English Education (CEE) urges its membership, as well as teachers, parents and students, to make use of this public comment period to respond to the proposed regulations – ideally by Jan. 2.
These regulations are disingenuous at best, hypocritical at worst, in their misrepresentation of and approach to quality teacher education. Therefore, we must state clearly and forcefully – to the DOE, as well as to US senators, state representatives, university presidents, state superintendents, school principals, teachers, students, neighbors and the public at large – that the proposed regulations will do more harm than good.
Whether online, through the media or in person, we must speak against the misguided beliefs driving such regulation: that teacher performance can be equated to student performance; that standardized tests provide meaningful evidence of learning; that student learning occurs in a vacuum; that there is one set approach that works with all students. We have been invited to speak, and we must accept the invitation – although it feels a bit like being invited to the Mad Hatter’s tea party, doesn’t it? “But I don’t want to go among mad people,” Alice remarked. “Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.” “How do you know I’m mad?” said Alice. “You must be,” said the Cat, “or you wouldn’t have come here.”
Despite very little evidence to support its efficacy for student learning, standardized testing has claimed our classrooms. “Objective” data drives decision-making rather than the “subjective” issues that affect the children we seek to educate. Teachers are constantly labeled as ineffective, uncaring, unprepared. Patently unqualified corporations, millionaires and for-profit businesses are invited to “solve” educational issues while patently qualified teachers, teacher educators and educational researchers are excluded from the discussion.
The document is found at http://www.regulations.gov/#!documentDetail;D=ED-2014-OPE-0057-0001
To do so, visit http://www.regulations.gov/#!submitComment;D=ED-2014-OPE-0057-0001
For additional information, view Jane West’s webinar: http://ceedar.education.ufl.edu/wp-content/uploads/2014/12/Teacher-Preparation-Regulations-for-CEEDAR.pdf
For an excellent example, see Anne Elrod Whitney’s piece Proposed Regulations Bad for Kids, Teachers, and Schools: http://writerswhocare.wordpress.com/2014/12/08/proposed-regulations-bad-for-kids-teachers-and-schools/

And now, teacher education programs have moved into the line of fire. If the proposed regulations are to be believed, teacher preparation currently functions with little accountability, producing poor-quality candidates whose abilities are not properly assessed. The evidence for such claims consists of flawed measures and unreliable research from questionable sources.
Yet, the answer to this (unproven) assumption is to increase assessment and accountability measures, despite no evidence that these measures have been beneficial as implemented in the public schools. Madness. Teacher preparation programs are, indeed, held accountable; they undergo assessment; they use data to inform their decision-making processes.
As the professional organization for English teacher education, CEE created the Standards for Initial Preparation of Teachers of Secondary English Language Arts 7-12; revised in 2012, these standards delineate the required competencies of knowledge, skills and dispositions connected to content, pedagogy, learners and professionalism. The Council for the Accreditation of Educator Preparation (CAEP) uses these standards to assess and recognize the abilities of English teacher education programs to prepare quality secondary English teachers. To meet these standards, programs must gather, analyze and report a wide range of data from both the program and the candidates. This external accountability is in addition to the internal accountability of the programs themselves. In-house, as it were, teacher preparation programs must remain cognizant of and respond to the internal and external pressures driving education in order to prepare teachers for the classroom.
Do some teacher education programs fail in this endeavor? Admittedly, yes. But the way to improve our teacher education programs is not with more assessment and accountability – measures that are, in and of themselves, already present and valued in higher education. Could these measures be improved? Certainly, as any educator knows. Teacher education programs recognize the need to improve our efforts to gather better data from and about our graduates; we are constantly revising our means of candidate assessment in order to respond to our needs and the requirements of an outside accrediting body.
What we don’t do is expect the test scores of our graduates’ students to provide a worthwhile measure of their teacher’s efficacy. Value-added measurement (VAM) has little support among those with the ability to understand the nuances of assessment, much less those of teaching and learning. Parents certainly do not support the current over-testing of their children; teachers know that reliance on externally developed high-stakes tests offers a distorted view of a child’s abilities; teacher educators recognize that assessment is a nuanced process that requires multiple measures over time. We know that assessing teachers’ worth on the test scores of the complex human beings they teach is a deeply flawed measure of ability, with no recognition of the many factors influencing both teaching and learning. Rather than admit this and seek better ways to determine quality teaching, however, the US Department of Education now proposes to assess the teachers of the teachers’ worth on those same test scores. Madness. Alice laughed. “There’s no use trying,” she said: “one can’t believe impossible things.” “I daresay you haven’t had much practice,” said the Queen. “When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”
For more on those nuances, see the American Statistical Association’s Statement on Using Value-Added Models for Educational Assessment: http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf

These regulations promulgate beliefs that those in education know to be false: that there is one right measure of learning, that there is one right method of teaching, that there is one right type of teacher, that there is one right way to prepare teachers. Teaching is a complex, complicated, challenging, often contentious, endeavor because those we seek to teach – and the subjects we seek to teach them – are complex and complicated and challenging and, often, contentious. We understand, though, that teacher education creates the foundation that our students build on for the rest of their teaching careers rather than hubristically assuming that we can boil teaching down to a set of “one size fits all” approaches that will serve in any situation.
Teacher education programs educate prospective teachers to understand, examine and respond to issues of content, pedagogy, learners and learning. It isn’t an easy job – hence the diversity of approaches and the ongoing assessment of those approaches in teacher education programs around the country. While the foundational principles of education may remain the same, English education programs in New York City are not – and should not be – the same as those in Cheyenne. What my students in West Lafayette, Indiana need to know in order to teach a largely rural population differs from what my colleague’s students in Tampa, Florida need to know in order to teach a largely urban population.
Yet, every day, we in teacher education embrace this difficult task of preparing young men and women to respond as experienced professionals to every possible combination of factors they will meet in their future classrooms. These regulations trade on the common complaint that many beginning teachers feel unprepared when they first enter the classroom, pointing back to a lack of preparation from their teacher education programs. Solidifying such unproven cause and effect into ill-suited regulation belies the many factors that shape a teacher’s entry into the classroom: the type of school, the level of support, the number of resources, the diversity of student issues in addition to the teacher’s individual abilities, understandings and personality. Assuming that this one factor – how teachers are prepared – contributes to the high rate of teacher turnover is yet another unproven cause and effect. Teachers don’t leave simply because they aren’t prepared well. They leave because political, social and rhetorical conditions in this country destroy their will to teach. And those conditions are now poised to destroy teacher education.
Has it occurred to no one (except educators) that one reason teachers leave the classroom is because many schools have become unpleasant places to be? This has less to do with their preparation – teacher education programs cannot control the factors their students will meet upon entering the classroom – and everything to do with the current climate in this country surrounding teachers and education. Why would anyone want to enter a profession that is continuously attacked, denigrated and demeaned in every public avenue? And, yet, I have students in my college classrooms wanting to do just that. These bright young women and men are cognizant that their choice of career is held in little regard; they understand that they will work long hours for little external reward; they accept that the public will disregard their intelligence, their ability and their commitment in seeking to become English teachers. They want to teach, however, because they want to do something meaningful with their brains and their bodies.
These young college graduates willingly take on an astounding level of responsibility from their very first day in the classroom because, as one of my students wrote recently, “How are we, as future teachers, supposed to challenge our students if we never challenge ourselves?” “Take some more tea,” the March Hare said to Alice, very earnestly. “I’ve had nothing yet,” Alice replied in an offended tone, “so I can’t take more.” “You mean you can’t take LESS,” said the Hatter: “it’s very easy to take MORE than nothing.”
At this point in our country’s history, teachers and teacher educators are doing their best with more of nothing: no public support for their work, no understanding of their professionalism, no recognition of the contributory factors to student learning. That extends to the teacher education programs that prepare them. We work against the fallacy that teacher education at the college level is of little benefit, that six-week boot camps can prepare anyone for the classroom, that those with no understanding of or background in education are better suited to do our work. The US DOE regulations of teacher education programs cost more time and more money – millions, in fact – while implementing an assessment system in higher education that has proven seriously flawed in the public schools. They assume a reductive approach to teacher preparation that belies the complex factors teacher education programs must navigate to educate their candidates. They dismiss the solid work happening in teacher education programs every day throughout the country in favor of pushing an agenda that neither conforms to reality nor recognizes expertise.
Like Alice, we need to push away from our seat at this table by clearly speaking against the misguided beliefs propelling these regulations. We need to publicly proclaim this party for the madness it is, opposing those who lead it and shaking those who slumber while it happens. We know better, as teacher educators. Every day, we do better, as teacher educators. It’s time we spoke up, as teacher educators, and established that we are better at assessing our students’ abilities as teachers than the measures proffered by these fundamentally flawed regulations.
Respectfully submitted,
Melanie Shoffner, PhD
Chair, Conference on English Education
While VAM is flawed in any use as an evaluation instrument, and we should oppose its use, there might be something positive in Duncan’s assault on teacher preparation programs – it could be a wake-up call.
Up to now the assault has been on K-12, and except for some notable voices, far too many in universities in general and education departments in particular have stayed on the bench (ivory bench) as the reformers go after K-12; in the worst cases they have colluded in preparing students to fit the test-prep school culture.
So now perhaps we might have more allies.
And when do we hear from the comparable groups of teacher education faculty and academics concerned about education in mathematics, in the sciences, in social studies, in the arts, foreign languages, health, physical education, and all of those who work to prepare teachers of students who are learning English, and groups representing teachers of students eligible for special education, and teacher unions, and parent groups and administrators and school boards? I think this expression of concern is wonderful, but late, and that USDE’s schedule for receiving comments is no accident. Many distractions over the holidays.
It’s not a bad idea to require teacher preparation programs to keep better track of what their teacher candidates are learning; the problem is in conflating the teacher’s skills with the skills and learning of their students. We really need a much more refined approach. My recent comments on the proposed College Ratings Framework apply to the teacher prep programs, too. Teacher prep programs are a subset of college programs in general. http://developingprofessionalstaff-mpls.blogspot.com/2014/12/measuring-student-learning.html
You’re assuming that “learning” is something that can be measured. That’s true only if “learning” equals facts poured from the teacher’s mouth into the students’ heads.
Dienne,
Is there something about my use of the word ‘measure’ that’s causing you a problem? Or, are you asserting that teachers don’t or can’t assess and evaluate (measure) student learning?
True learning is not something that can be measured. Learning is much more than what can be regurgitated on a standardized test. It’s about constructing knowledge and understanding of the world. Not memorizing teacher-imparted facts or mastering teacher-directed skills. The type of “learning” that can be measured actually undermines real learning because it’s all about striving for someone else’s “correct” answer.
Real evaluation involves the quality of students’ creativity. Measurement precludes creativity because creativity cannot be standardized as its very definition precludes objective measurement.
Teachers can know whether their students are learning problem-solving skills, but when that learning must be measured in a way that can be stretched across students (standardized), test questions become blunt instruments: a question is either right or wrong, not rated on a continuum the way learning is. It is presumed that if students answer enough questions, the final result will reflect where on the continuum the student falls, but myriad examples – most notably at the very top and bottom of the spectrum – make clear that this model does not work for those students and their teachers.
But hey, let’s go with the solution that’s not only imperfect but horribly, irreparably flawed, because it lets us achieve our measurement goals over what is best for students. The measurement becomes the goal, not the learning. In evaluation, by contrast, how students are judged is a more passive measure of student gains: not nearly so high stakes, and based on what happens in a particular classroom rather than between classrooms – the constant conundrum of going at the rate the students are learning vs. the pacing necessary to achieve outside content curriculum goals.
When medical schools are rated by how many patients their graduates cure, then this will be a good idea. But let’s see how many choose plastic surgery over oncology.
My response submitted to the gov’t regulation site:
I am stunned that any informed decision maker would consider Value Added Measures an effective method to improve teacher preparation. My background as a teacher, elementary administrator, middle school administrator, K-12 coordinator of gifted/talented programming, and school improvement coach to principals and teachers who are receiving an MA in urban studies gives me a rather broad perspective. Twenty-three of my 43 years of experience have been in Tennessee, the birthplace of value added. The formula used to “measure” teacher effectiveness is riddled with assumptions that ignore the affective issues that motivate and provide a sense of security between teachers and students. Additionally, how can a formula that came from agriculture find its way into the academic realm and be applied to human subjects? I have witnessed, first-hand, the terrible ramifications, particularly in poor, urban schools, of the absurdity and misuse of “data” to make “objective” decisions about careers and school closings.
Value added in the 1990s was used as a very private calculation, considered during 1:1 teacher/administrator discussions to provide information regarding how effective the teacher was with three different groupings of students: below average, average, and above average. In many cases, it could reveal that teachers were targeting instruction to the lowest groups, thus leading to a professional development plan to differentiate instruction, which would then meet the needs of all students. With the emphasis on test scores under NCLB, heightened during the era of Race to the Top, VAM has been promoted as a means to make judgments for which this calculation should never be used.
I have experience in rural, suburban, inner-city, affluent, and “high priority” schools. VAM has been problematic in every situation. The two extremes, inner-city and affluent, demonstrate two extremes of the problems with VAM. In the affluent school, it is possible to score at the highest rankings for academic achievement and the lowest for VAM. As Dr. Bill Sanders put it, “There is no ceiling effect when calculating VAM.” I know that to be ridiculous because of my ten-year association with gifted and talented students. Yet, as principal, I was held accountable for the low VAM scores when it was unknown just how academically proficient some of my students were.
On the other end of the spectrum, inner-city poor, VAM can look quite good while academic achievement is at the bottom when compared to other students in the district. Yet issues of special education and non-English-speaking students find no place in this argument, let alone the influences of poverty. In fact, under the new regulations, students with IEPs will be given the same test as their regular ed peers – of course with accommodations – but with no consideration of intellectual capability. Is there any parent who would prefer a child to be 30 points below average IQ (Tennessee’s definition of mental retardation) vs. 30 points above average? Students do form a bell curve when cognitive ability is measured, but that has no place in this discussion of VAM.
Finally, I am totally stunned that group assessments are used to make serious decisions about student achievement and teacher effectiveness. In both gifted/talented and special education ANY decision about student placement must be done with a battery of individual assessments due to the poor reliability of group tests. Why is this ignored? I have personally administered hundreds of individual cognitive ability tests (IQ tests), as well as hundreds of individual achievement tests. This important issue, the difference between individual and group tests, never seems to find its way into the discussion of “objective” measures to compare one school with another.
Given my depth of experience, I must register my GREAT disdain regarding any suggestion that VAM will be beneficial in teacher preparation programs. One should look to Finland, where they have addressed the issue of teacher preparation, AND selection to teacher prep programs, while limiting the number of universities given the licenses to train new teachers. Those changes, PLUS an extraordinary nod to continuous professional development and a true respect for the profession, have had extremely positive effects on student achievement in that country.
The MISUSE of VAM is immoral and should be terminated immediately!
Below is the message I sent to as many Federal sites as I could find. Thank you for keeping us informed.
If you believe this is an intellectually good idea, then we need to judge doctors and hospitals by their patients’ health and how long they live or how well they do on their blood tests! If you agree with both ideas, then you should be institutionalized because you may cause harm to others.
The best way to stop VAM may be to embarrass the academics who lend it faux credibility.
Perhaps this might be accomplished by writing emails referencing the American Statistical Association’s VAM position paper to the heads of statistics departments at high-profile universities like Harvard, MIT, Caltech, and Princeton, and copying the email to VAM supporters like Raj Chetty.
Many (if not most) statisticians may not be aware of the kind of junk statistics that are being used to support VAM (in some cases by their own university colleagues).