Education Next is a conservative journal that can be counted on to support education reform in all its manifestations.
However, today it is releasing a new study finding that rating teacher education programs by the test scores of students taught by their graduates is ineffective. As we have often said, VAM (value-added measurement), beloved by Arne Duncan, is a sham. The now-discredited rule was promulgated by the Obama administration.
Ranking teacher-prep programs on value-added is ineffective
New analysis finds program rankings based on graduates’ value-added scores are largely random
Last year Congress repealed a federal rule that would have required states to rank teacher-preparation programs according to their graduates’ impact on student test scores. Yet twenty-one states and D.C. still choose to rank programs in this way. Can student test performance reliably identify more and less effective teacher-preparation programs? In a new article for Education Next, Paul T. von Hippel of the University of Texas at Austin and Laura Bellows of Duke University find that the answer is usually no.
Differences between programs too small to matter. Von Hippel and Bellows find that the differences between teachers from different preparation programs are typically too small to matter. Having a teacher from a good program rather than an average program will, on average, raise a student’s test scores by 1 percentile point or less.
Program rankings largely random. The errors that states make in estimating differences between programs are often larger than the differences states are trying to estimate. Program rankings are so noisy and error-prone that in many cases states might as well rank programs at random.
High chance of false positives. Even when a program appears to stand out from the pack, in most cases it will be a “false positive”—an ordinary program whose ranking is much higher (or lower) than it deserves. Some states do have one or two programs that are truly extraordinary, but published rankings do a poor job of distinguishing these “true positives” from the false ones.
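The false-positive pattern is easy to reproduce in a toy simulation. The sketch below (in Python, with illustrative numbers that are my assumptions, not figures from the article) draws small true program effects, adds larger estimation error, and checks how many of the apparent top-10 programs are genuinely top-10:

```python
import random

# Toy simulation of program rankings when true differences are tiny
# relative to estimation error. All numbers here are illustrative
# assumptions, not estimates from the von Hippel and Bellows article.
random.seed(0)

N_PROGRAMS = 100
TRUE_SD = 0.01   # spread of true program effects (in student test-score SDs)
NOISE_SD = 0.03  # estimation error, assumed several times larger
TRIALS = 200

def top10(values):
    """Return the indices of the 10 largest values."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    return set(order[:10])

overlap_total = 0
for _ in range(TRIALS):
    true_effects = [random.gauss(0, TRUE_SD) for _ in range(N_PROGRAMS)]
    estimates = [t + random.gauss(0, NOISE_SD) for t in true_effects]
    overlap_total += len(top10(true_effects) & top10(estimates))

avg_overlap = overlap_total / TRIALS
print(f"Of the 10 top-ranked programs, about {avg_overlap:.1f} are truly top-10 on average;")
print(f"the other {10 - avg_overlap:.1f} are false positives.")
```

Under these assumed numbers, most programs in the estimated top decile are ordinary programs that got lucky, which is the "false positive" pattern the article describes; a purely random ranking would place about 1 of 10 correctly, so the noisy ranking does only modestly better.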
Consistent results across six states. Using statistical best practices, von Hippel and Bellows found consistent results across six different locations: Texas, Florida, Louisiana, Missouri, Washington State, and New York City. In every location the true differences between most programs were minuscule, and program rankings consisted mostly of noise. This was true even in states where previous evaluations had suggested larger differences.
When measured in terms of teacher value-added, “the differences between [teacher-preparation] programs are typically too small to matter. And they’re practically impossible to estimate with any reliability,” say von Hippel and Bellows. They consider other ways to monitor program quality and conclude that most are not ready for prime time. But they do endorse reporting the share of a program’s graduates who become teachers and persist in the profession—especially in high-need subjects and high-need schools.
To receive a copy of “Rating Teacher-Preparation Programs: Can value-added make useful distinctions?” please contact Jackie Kerstetter at jackie.kerstetter@educationnext.org. The article will be available Tuesday, May 8 on educationnext.org and will appear in the Summer 2018 issue of Education Next, available in print on May 24, 2018.
About the Authors: Paul T. von Hippel is an associate professor at the University of Texas at Austin and Laura Bellows is a doctoral student in public policy at Duke University.
Why should we rank teacher education programs at all? We are too focused on stacking each other in piles rather than on helping each other out. Deciding what is better for a child is very much relative to his unique reality. Deciding what is better for an aspiring teacher is like that too.
Thank you! Beat me to it.
Maybe teacher preparation programs should challenge the validity of this move in court the same way some teachers challenged VAM. What is at work here is the intention to undermine legitimate teacher preparation programs in order to peddle fake programs like Relay. This is all part of the bigger picture that intends to deprofessionalize teaching and turn it into a low wage, low benefit temp job.
I was privileged to serve as an alternate during negotiated rule making around the Title II regulations and teacher education regulations. Arne Duncan was the Secretary of Education. The teacher education faculty fought to keep value-added measures from becoming part of the regulations but we were set up from the beginning. Those members who were convinced that teacher education was a total waste of time used value-added measures to assess teacher performance and ultimately, use the data to rank teacher education programs. I am so happy to read that this approach has been discredited.
“Cranking out the Numbers”
VAMdumb numbers
Teacher ranks
Duncan blunders
Work of cranks
Pseudo-science
Crackpot model
VAM reliance
Mathy twaddle
“Can student test performance reliably identify more and less effective teacher-preparation programs?”
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA. . . AD INFINITUM.
To even pose that question demonstrates an ignorance of such depth that it reaches sheer stupidity.
“Down the Rabbit Hole”
The rabbit hole is long
But once you pass the door
You’ll sing a crazy song
Of standardizing score
Of “measurements” with VAM
As random as a die
It really is a sham
But Hatter rules apply
What is MORE important to ask of those ridiculous NUMBERS is WHAT THEY DON'T and CAN'T ever tell us about what is really going on. A number is a number… numbers, like words, NEED CONTEXT.
TN is doubling down on this and using the VAM scores to give grades to university teacher prep programs.
Tell Candace McQueen that it doesn’t work.
Her Questar testing does not work either. It has failed for the third year in a row. Nor does she answer the Swacker question: how do you measure learning? Of course not, she cannot.
Minor correction, Roy; well, actually it is quite major. My question is: "What is the agreed-upon standard unit of learning that is [not] being used to supposedly 'measure' learning?"
You are correct to state that one cannot measure learning.
“But they do endorse reporting the share of a program’s graduates who become teachers and persist in the profession—especially in high-need subjects and high-need schools.”
These are absurd criteria for judging any professional preparation program. The criteria are even absurd for judging a postsecondary program for a registered nurse, electrician, or auto mechanic.
Suppose these criteria were applied to all undergraduate programs, with a spotlight on the proportion of graduates who do not choose to work in the program's target job category (in this case, teaching) or in one of the "high need" specializations and "high need" venues for work, with these venues defined by the demographic characteristics of a state or geographic region (or other indicators of "high need").
Suppose that the program would also be judged by the persistence of graduates in a single job category (e.g., no mobility from teaching to another role in education, to another geographic region, or to another line of work).
The point of this thought experiment is to show that the premises of program evaluation being suggested for teachers are designed to restrict teacher prep to flawed concepts of static labor market demands, venues for work, and conditions for work. The criteria are asserted as if faculty in higher education and students enrolled in a program are not entitled to question the criteria.
This concept of accountability is being marketed without regard to its kinship to authoritarianism. The criteria are aimed at denying the possibility for academic freedom and independent judgments by faculty and students. The criteria are designed to justify programs that train teachers to be worker bees, not programs that will educate them.
This does not seem inappropriate to me. A professional education should do a good job preparing a student for the targeted profession. Surely one of the reasons that a person would not persist in a profession would be inadequate preparation.
A second reason for this transparency would be to help students to make decisions about their future. If they know that very few students in a professional school, say a school of music, become professional musicians of any sort, they can better evaluate the hours devoted to a fine arts degree (typically far more hours than any other degree granted at a university).
Do you have any idea of the number of jobs that a graduate of a music school might pursue? How about art school graduates? Engineers? Why does pursuing a teaching degree mean that someone must spend their entire life as a teacher?
Who said anything about spending a lifetime?
It seems to me that when a student spends four years preparing for a career at a school specifically designed to educate the student to succeed in that career, having many of those students leave the career after a year or two should raise some red flags. It suggests the possibility that the students either 1) were not well prepared for the career or 2) were not given a realistic understanding of what the career entailed.
Teachers in Finland spend five years preparing to teach. They don’t leave the profession. It is highly respected and treated as equal to other professions, like law and medicine and architecture. They are government employees and they belong to a union. They like their work.
Dr. Ravitch,
It appears we are in agreement here. When there is excellent professional education preparing students for a career, like in Finland, we see very few leaving that career. When there is poor preparation, we see many leaving that career.
Omigod, that is a first.
However, I don’t agree that teachers leave because they are poorly prepared.
They leave because they can’t live on a teacher’s salary.
They leave because working conditions are poor.
They leave for many reasons having to do with lack of public support for teachers, not because they are ill prepared.
I certainly agree that teachers might leave because the salaries are low and/or they find the working conditions unacceptable, but shouldn’t it be part of professional education to inform students of the salaries and working conditions in the profession?
Many, many posters over the years here have argued that poor teachers leave teaching after a relatively short time, helped to this decision by experienced teachers. It is this group of teachers leaving the profession that I had in mind when thinking of poor preparation. Do you think it an insignificant number?
I don’t have the numbers, but believe that many experienced teachers are leaving and have left the profession because of external pressures, like state testing, that did not exist when they started. And the lack of adequate funding to control class sizes and support the needs of students with disabilities and ELLs. And low salaries compared to others with similar education.
I suspect that a few Teach for Awhile recruits might actually leave because they were unprepared — but probably not many cuz, you know, a 5 week Summer Camp with the likes of Michelle Rhee is undoubtedly enough to whip anyone into shape (quite literally)
And of course, there is the famous Safari in the Hood (where TFA newbies drive around in inner cities to learn about the native life) to complete the experience. How could one ever top that when it comes to teacher training?
This is a rough look at attrition across years of teaching experience.
In 1999-2000:
11% of teachers had fewer than 3 years of experience
28% of teachers had between 3 and 9 years of experience
29% of teachers had between 10 and 20 years of experience
32% of teachers had over 20 years of experience
In 2015-16:
10% of teachers had fewer than 3 years of experience
28% of teachers had between 3 and 9 years of experience
39% of teachers had between 10 and 20 years of experience
22% of teachers had over 20 years of experience
A higher percentage had regular certification in 2015-16 than in 1999-2000.
Source: https://nces.ed.gov/programs/coe/indicator_clr.asp
TE,
Have you checked the dramatic decline in enrollments in teacher education programs?
Dr. Ravitch,
Before moving on to your new point, here is some more information pertaining to your original point.
Age distribution of teachers
1999 – 2000
Under 30 17%
30-39 22%
40-49 31.8%
50-59 26.2%
60 and over 3.1%
2015 – 2016
Under 30 15%
30-39 28.5%
40-49 27.4%
50-59 21.5%
60 and over 7.6%
We have relatively fewer young teachers and relatively more older teachers now than at the turn of the century. I am not sure much can be made of the differences in age distribution between 30 and 60, but I look forward to any comments that might be made.
Data source: https://nces.ed.gov/programs/digest/d17/tables/dt17_209.22.asp
Dr. Ravitch,
Relative to your new point: overall we are likely to need somewhat more teachers in the future than we currently have, as the school-age population is projected to grow by about 3% between 2014 and 2026, but this growth is very uneven across states. The Northeast, for example, will need fewer teachers if the NCES forecasts are correct. The school-age population is expected to decline by 14% in New Hampshire and Connecticut, 12% in Maine, 10% in Vermont and Michigan, 7% in Pennsylvania, 6% in Ohio, Mississippi, and New Jersey, 5% in Rhode Island and Illinois, 4% in Wisconsin and Massachusetts, 3% in Indiana, Alabama, and West Virginia, 2% in California, and 1% in New York and Missouri. These states should probably train fewer teachers.
The states with more students are largely in the south and west (with the exception of California). These are likely to need more teachers. It will take a bit more work to find enrollment trends in teacher education by state.
TE, I suggest that you inquire into the collapse of enrollments in teacher preparation programs. Start by talking to the teacher educators at your own universities.
California has seen a dramatic decline. When I visited a major university in Michigan recently, I learned their enrollments had dropped by 60%. That is not minor.
Apologies for leaving the data source out of the post above: https://nces.ed.gov/programs/coe/indicator_cga.asp
A quick look at Michigan shows the largest drops in teacher education students at private colleges, but increases at some public universities. This shift from small private colleges to public universities is perhaps a good thing. Overall enrollment in Michigan schools is currently at its lowest point since the 1950s, and as pointed out before, the school-age population is expected to drop by another 10% in the coming decade. Clearly Michigan will need fewer teachers going forward than in the past.
There was certainly a drop in enrollment between 2013-14 and 2015-16, but unfortunately there is little data beyond that, so it is difficult to tell how the single-year drop fits into longer trends. It may be a comfort that Dan Quinn, director of The Great Lakes Center for Educational Research and Practice, does not believe this portends a teacher shortage in the near future, though he expects some impact down the road.
Data sources: https://www.bridgemi.com/talent-education/fewer-college-students-want-be-teachers-and-why-it-matters-searchable-database
https://title2.ed.gov/Public/Report/StateHighlights/StateHighlights.aspx?p=2_01
I visited Western Michigan University. It is a public university. Teacher enrollments are down 60%.
Well, if anyone cares to look they can find the increases at UM-Dearborn and Ferris State in the data links.
TE, if there are increases at those two locations they are counter to a strong national trend. Since when do economists base policy on outliers?
Maybe there is a disconnect between traditional teacher preparation programs and the classroom. Traditional programs still prepare teachers for a professional career in teaching and all that implies. However, many classrooms today are micromanaged from above, with scripted curriculum and mandates about how a classroom is to be run. Perhaps that is why the Relay grad school is so popular. It is not training teachers but good worker bees who can spout the curriculum in true robotic form. Anyone who is trained to be a teacher in a traditional program and lands in a school that marches to a particularly orchestrated reform agenda is likely to exit early. I will agree that mentorship efforts are not up to par even in the public schools (for various reasons, not always legitimate), but blaming teacher attrition on teacher prep programs falls far short of explaining the retention problem. Teachers are not walking out because their prep programs were poor.