Archives for category: Education Reform

Despite a Supreme Court ruling that immigrant children without citizenship status have the right to free public schooling, Fox News has taken a strong stand in opposition, according to Media Matters for America.

Its researchers write:

Fox News Decries Granting Undocumented Children Their Right To Access Public Education

Fox News personalities criticized a plan allowing newly arrived child migrants access to public education as “tragic” and dangerous, despite a Supreme Court decision guaranteeing all children access to education regardless of immigration status.

Fox Figures Complain That Refugee Children Receive Taxpayer-Funded Education

Fox Guest: It Is “Tragic On So Many Levels” For U.S. To Educate Immigrant Children. On the August 12 edition of Fox News’ Your World, host Neil Cavuto invited conservative talk show host Gina Loudon on to criticize the fact that undocumented immigrant children receive public education. Loudon claimed it was “tragic on so many levels” for the U.S. to educate the undocumented children, adding that without criminal background checks and health screenings, schools won’t know “if this student is a murderer” or “has one of the diseases that we’re hearing about coming across the border.” [Fox News, Your World with Neil Cavuto, 8/12/14]

Fox’s Tucker Carlson: “But What About The Rights Of The Kids Who Were Born Here?” On the August 11 edition of Fox News’ Fox & Friends, co-host Tucker Carlson responded to the notion that it is the United States’ legal obligation to educate children who come into the country by saying, “But what about the rights of the kids who were born here, the American citizens who presumably have the right to a decent education and aren’t getting one because of this?” [Fox News, Fox & Friends, 8/11/14]

Fox Business’ Buttner: “Forget The Ebola Scare. Is It Really The Back To School Scare?” On the August 10 edition of Fox Business’ Bulls and Bears, host Brenda Buttner questioned whether parents should be concerned with “a surge of up to 60,000 illegal kids in their classrooms.” Buttner exclaimed, “Forget the Ebola scare. Is it really the back to school scare?” Fox Business reporter Tracy Byrnes later insisted that “we have to take care of our own first.” [Fox Business, Bulls and Bears, 8/10/14]

REALITY: School-Age Children In America Are Guaranteed Equal Access To Education, Irrespective Of Immigration Status

American Immigration Council: Supreme Court Guaranteed Undocumented Immigrant Children Equal Access To Education Under The 14th Amendment In Plyler v. Doe. In 1982, the U.S. Supreme Court ruled that a Texas statute that withheld state funding from local school districts that educated undocumented immigrant children was a violation of the 14th Amendment. The Court found that undocumented immigrants and their children are people “in any ordinary sense of the term” and are thus guaranteed equal protection under the law, including the right not to be unfairly barred from the public school system:

The Court based its ruling on the Fourteenth Amendment of the U.S. Constitution, which says in part, “No State shall … deny to any person within its jurisdiction the equal protection of the laws.” (This provision is commonly known as the “Equal Protection Clause.”) Under this provision, the Court held that if states provide a free public education to U.S. citizens and lawfully present foreign-born children, they cannot deny such an education to undocumented children without “showing that it furthers some substantial state interest.”

The Court found that the school district had no rational basis to deny children a public education based on their immigration status, given the harm the policy would inflict on the children themselves and society as a whole. “By denying these children a basic education,” the Court said, “we deny them the ability to live within the structure of our civic institutions, and foreclose any realistic possibility that they will contribute in even the smallest way to the progress of our Nation.” The Court also said that holding children accountable for their parents’ actions “does not comport with fundamental conceptions of justice.” [American Immigration Council, 6/15/12]

Reader Art Seagal comments on the latest, most destructive fads in American education–destructive because they are mandatory and do not permit teacher judgment or professionalism.

Seagal writes:

I just read a telling article in an alumni magazine all about one man’s (Clayton Christensen’s) business concept: “disruptive innovation.” Sadly, our nation’s children and teachers have become pawns in a corporate-centric world, constantly moved over the chessboard so that opponents’ kings can be checkmated. “Edupreneurs” (you pick from a string of them; the latest is David Coleman) are trying to play Christensen’s concept (which really is a statement of the obvious put through marketing and given a “brand”) to become the KING, the last man standing, the American Idol, the Survivor, the Bachelorette; you name it, and the corporate world is going to find that ONE PROFIT-MAKING IDEA that is going to be ON TOP (hence profitable), rendering everything before it useless. This may work for products (think cell phone and landline), but it certainly is not working for the basics of humanity: our quest to learn. The mere attempt to be the “disruptive innovator” is destroying public education (though a lot more is contributing to this destruction, too, like poverty and a failing democratic process on a national level).

I mentioned this era of “guru-ization” before. Ravitch totally nails it in this recent article: the revolving door of “next best,” “this way or the highway” public education has taken professional control totally away from teachers and put it into the hands of what I would call wannabe “disruptive innovators.” I am thankful for her existence on a daily basis!

We need to bring back teacher control. Yes, teachers who constantly keep updated, read about various education ideas, and actually pick and choose those components they professionally feel merit use in their particular classrooms. When a program like Balanced Literacy, developed by someone with a lot of ed experience, suddenly becomes THE ONE PROGRAM in NYC, it serves not to benefit but to disenfranchise, because it is expected (no, demanded by authorities) to be implemented in a one-size-fits-all way. The business model has perpetuated “guru-ization” by dangling the potential for enormous profit off of “that one idea” that goes forcefully viral. Let’s keep these ideas but not let the corporate world co-opt them!

Coleman’s theories need a good looking-over by people who actually have education experience, not testing experience. Teachers are perfectly capable of looking at his ideas and tossing out everything that does not work. But this is not how it works: they must follow ALL OF IT, despite their experience telling them otherwise. Dare I say it, but if teachers were allowed to choose from their readings what and how to implement various components of various education ideas, success might be a lot more prevalent. And yes, most teachers I work with WANT to go to PDs, not PR brainwashing events but ones of their choosing that actually help them in the classroom. One fabulous teacher I know paid on her own dime (as we usually do when we want REAL PDs) and could not talk enough about a “brain and the young child” conference she attended (led by a neurologist). Instead we are forced to attend conferences where non-educators are trained specifically to teach educators, and their bosses are getting heaps of money to inflict nonsense on these teachers. These trainers can never answer the nitty-gritty, real questions that teachers ask, because they have not had the requisite classroom experience. And quite often they are charged with selling their company’s “brand.” The superintendents, meanwhile, get to “check off” that their county’s teachers have been provided “essential training” on a checklist that likely satisfies a government entity providing funding to their county. Junk-food PDs.


I feel sorry for our “down under” friends… their government’s willingness to follow the US public education model truly will put their nation’s most valuable (their young) “down and under.”

Paul Thomas says that events are moving swiftly, and we must move with them.

When the corporate reform movement started, educators were taken by surprise and treated like children. When did it start? Was it the accountability movement that began after “A Nation at Risk” in 1983? Was it the passage of No Child Left Behind in 2001? Or the election of Michael Bloomberg in 2001 and years of pointing to the New York City “miracle”? Or the 2007 appointment of Michelle Rhee, the darling of the media? Or the arrival of Race to the Top, which was no better than NCLB? Or the firing of the staff in Central Falls, Rhode Island, and the release of “Waiting for Superman” in 2010?

Thomas writes:

“Most of those accountability years, I would classify as Phase 1, a period characterized by a political monopoly on both public discourse and policy addressing primarily public K-12 education.

“We are now in Phase 2, a time in which (in many ways aided by the rise in social media—Twitter, blogging, Facebook—and the alternative press—AlterNet and Truthout) teachers, professors, and educational scholars have begun to create a resistance to the political, media, and public commitments to recycling false charges of educational failure in order to continue the same failed approaches to education reform again and again.

“In Phase 1, educators were subjected to the role of the child; we were asked to be seen but not heard.

“In Phase 2, adolescence kicked in, and we quite frankly began to experiment with our rebellious selves. In many instances, we have been pitching a fit—a completely warranted tantrum, I believe, but a tantrum nonetheless.”

Now we are in Phase 3, says Thomas. In Phase 3, we shift to substance, not just putting out fires. We are the adults. The reformers may hold the reins of power but they are in retreat as it turns out that none of their ideas actually works.

He says: “In short, as I have argued about the Common Core debate, the resistance has reached a point when we must forefront rational and evidence-based alternatives to a crumbling education reform disaster.

“We must be the adults in the room, the calm in the storm. It won’t be easy, but it is time for the resistance to grow up and take our next step.”

I am all for Phase 3, but I am not sure who will be convinced by rational and evidence-based alternatives. We have always had the evidence. We have known–even the reformers have known–that their reforms are causing a disaster. They believe in disruption as a matter of principle. How do we persuade them to consider reason and evidence? I think that Phase 3 commences when parents and educators wake up and throw the rascals out of office. In state after state, they are attacking public education, teachers, and the principle of equality of educational opportunity. The best way to stop them is to vote them out.

Michael Brown, the youth who was killed by police in Ferguson, Missouri, graduated from Normandy High School. You may recall reading here that the Normandy School District, which was 98% African-American, was merged by the state with the nearby Wellston School District, which was 100% African-American.

Michael’s graduation picture was taken in March 2014. Why so far ahead of the graduation date? The high school had only two graduation gowns, and they had to be shared by the entire class. Mark Sumner tells the story of Michael Brown’s high school on The Daily Kos, and it is heartbreaking.

“The grinding poverty in Mike’s world only allowed Normandy High School to acquire two graduation gowns to be shared by the entire class. The students passed a gown from one to the other. Each put the gown on, in turn, and sat before the camera to have their graduation photographs taken. Until it was Mike’s turn.

“What kind of American school would have to share robes across the entire senior class? The kind that’s been the subject of a lot of attention from the state board of education.

“This district was created by merging two of the poorest, most heavily minority districts around St. Louis—Normandy and Wellston. The poverty rate for families sending their kids to Normandy Schools was 92 percent. At Wellston School District, the poverty rate was 98 percent. Every single student in the Wellston district was African American.

“Still, the state education board voted to merge the districts in 2010 (the first change to state school district boundaries in thirty-five years). Plagued by white flight, crashing property values that destroyed tax revenues, and a loss of state funds as the better-off residents of the area sent their children to private schools, the resulting district isn’t just short of gowns, it’s short of everything. Residents of the district voted again and again to raise their own property taxes, until their rates were actually the highest in the state, but a higher percentage of nothing was still nothing, and district revenues trended steadily down.”

And more:

“So who actually runs Michael Brown’s school district? Well, the president of the board of education is Peter F. Herschend of Branson, Missouri. Herschend isn’t a former teacher, or a former principal, and doesn’t have any training in the education field. He’s the owner of Herschend Family Entertainment, which runs Silver Dollar City and other amusement parks. He’s also one of the biggest contributors to the Republican Party in the state.

“So, when you’re wondering who runs Michael Brown’s school district—when you’re wondering who’s in control of an urban, minority district so poor that the students have only two graduation gowns to share—it’s a white Republican millionaire from out state.”

There have been far too many killings of unarmed young black men. The nation expressed shock when George Zimmerman fatally shot Trayvon Martin in Florida. The nation should be even more outraged when young men like Michael Brown in Ferguson, Missouri, are killed by the police. The U.S. Justice Department should set standards for the training of police officers so that the use of firearms is a last resort or a very rare occurrence. The police should be the protectors of the community, the keepers of the peace, not an armed force to be feared by young men of color.

It is time for Eric Holder, the Attorney General of the United States, to take the lead, not only in demanding an end to the use of deadly force against young people, but also in setting national standards for police conduct and in prosecuting police forces that terrorize people of color.

Here is an account worth reading.

Federal law states clearly that no agent of the federal government may seek to influence, direct, or control curriculum or instruction. For many decades, both parties agreed that they did not want the party in power to use federal power to control the schools of the nation. Thus, while it was appropriate for the U.S. Department of Education to use its funding to enforce Supreme Court decisions to desegregate the schools, it was prohibited from seeking to control curriculum and instruction. Both parties recognized that education is a state and local function, and neither trusted the other to impose its ideas on the schools.

That explains why Arne Duncan did not use federal funding to pay for the Common Core, but it does not explain why he used the power of his office to promote the CCSS or why he paid out some $350 million for assessments specifically designed to test the Common Core standards. As every teacher knows, tests drive curriculum and instruction, especially when the tests are connected to high stakes.

In this post, Mercedes Schneider explains the battle royal in Louisiana, where Governor Jindal is fighting State Commissioner of Education John White and the Board of Elementary and Secondary Education over the CCSS and the aligned PARCC tests. Now Jindal has decided to sue on the grounds that the U.S. Department of Education acted illegally by aiding the creation of the CCSS and the tests. The funniest part of the post, as Schneider writes, is seeing politicians accuse other politicians of acting like politicians.

I hope the underlying issues get a full airing. When I worked at the U.S. Department of Education in the early 1990s in the administration of President George H.W. Bush, we were much aware of the ban on federal involvement in curriculum and instruction. We funded voluntary national standards, but we kept our distance from the professional associations working to write them, and they were always described as voluntary national standards.

Lisa Woods, who has taught for 25 years, explains clearly in this post why schools will never run like businesses. It originally appeared in the Greensboro (N.C.) News & Record.

She asks readers to imagine a job where one’s compensation and one’s “job performance and value” depend on the following conditions:

“* You are meeting with 35 clients in a room designed to hold 20.

“* The air conditioning and/or heat may or may not be working, and your roof leaks in three places, one of which is the table where your customers are gathered.

“* Of the 35, five do not speak English, and no interpreters are provided.

“* Fifteen are there because they are forced by their “bosses” to be there but hate your product.

“* Eight do not have the funds to purchase your product.

“* Seven have no prior experience with your product and have no idea what it is or how to use it.

“* Two are removed for fighting over a chair.

“* Only two-thirds of your clients appear well-rested and well-fed.

“You are expected to:

“* Make your presentation in 40 minutes.

“* Have up-to-date, professionally created information concerning your product.

“* Keep complete paperwork and assessments of product understanding for each client and remediate where there is lack of understanding.

“* Use at least three different methods of conveying your information: visual, auditory and hands-on.

“The “criterion” for measuring your “worth and value” is that no less than 100 percent of your clients must buy and have the knowledge to assemble and use your product, both creatively and critically, and in conjunction with other products your company produces, of which you have working but limited knowledge.

“Only half of the clients arrive with the necessary materials to be successful in their understanding of your product, and your presentation is disrupted at least five times during the 40 minutes.

“You have an outdated product manual and one old computer, but no presentation equipment. Your company’s budget has been cut every year for the past 10 years, the latest by a third. Does this mean you only create two-thirds of a presentation? These cuts include your mandatory training and presentation materials (current ones available to you are outdated by five years).

“You have no assistant, and you must do all the paperwork, research your knowledge deficiencies and produce professional-looking, updated materials during the 40 minutes allotted to you during the professional day. You cannot use your 30-minute lunch break. Half is spent monitoring other clients who are not your own.

“Your company cannot afford to train you in areas of its product line where you may be deficient, yet you are expected to have this knowledge and incorporate it into your product presentation in a meaningful way.

“You haven’t had a raise in eight years and your benefits have been purged, nor do you receive a commission for any product you sell. Do you purchase all the materials needed so your presentation is effective? Will you pay for the mandatory training necessary to do your job in a competent and professional manner?”

What business could succeed under those conditions?

This is an important article, which criticizes and deconstructs the notorious VAM study by Chetty et al. I refer to it as notorious because it was reported on the front page of the New York Times before it was peer-reviewed; it was immediately presented on the PBS NewsHour; and President Obama referred to its findings in his State of the Union address only weeks after it first appeared.

These miraculous events do not happen by accident. The study made grand claims for the importance of value-added measures of teacher quality, a keystone of Obama’s education policy. One of the authors told the New York Times that the lesson of the study was to fire teachers sooner rather than later. A few months ago, the American Statistical Association responded to the study, not harshly, but it made clear that the study’s claims were overstated, that teachers account for only 1% to 14% of the variability in students’ test scores, and that changes in the system would likely have more influence on students’ academic outcomes than attaching the scores of students to individual teachers.

I have said it before, and I will say it again: VAM is Junk Science. Looking at children as machine-made widgets and looking at learning solely as standardized test scores may thrill some econometricians, but it has nothing to do with the real world of children, learning, and teaching. It is a grand theory that might net its authors a Nobel Prize for its grandiosity, but it is both meaningless in relation to any genuine concept of education and harmful in its mechanistic and reductive view of humanity.

CHETTY, ET AL, ON THE AMERICAN STATISTICAL ASSOCIATION’S RECENT POSITION STATEMENT ON VALUE-ADDED MODELS (VAMs): FIVE POINTS OF CONTENTION

by Margarita Pivovarova, Jennifer Broatch & Audrey Amrein-Beardsley — August 01, 2014

Over the last decade, teacher evaluation based on value-added models (VAMs) has become central to the public debate over education policy. In this commentary, we critique and deconstruct the arguments proposed by the authors of a highly publicized study that linked teacher value-added models to students’ long-run outcomes, Chetty et al. (2014, forthcoming), in their response to the American Statistical Association statement on VAMs. We draw on recent academic literature to support our counter-arguments along the main points of contention: the causality of VAM estimates, the transparency of VAMs, the effect of non-random sorting of students on VAM estimates, and the sensitivity of VAMs to model specification.

INTRODUCTION

Recently, the authors of a highly publicized and cited study that linked teacher value-added estimates to the long-run outcomes of their students (Chetty, Friedman, & Rockoff, 2011; see also Chetty, et al., in press I, in press II) published a “point-by-point” discussion of the “Statement on Using Value-Added Models for Educational Assessment” released by the American Statistical Association (ASA, 2014). This once again brought the value-added model (VAM) and its use for increased teacher and school accountability to the forefront of heated policy debate.

In this commentary we elaborate on some of the statements made by Chetty, et al. (2014). We position both the ASA’s statement and Chetty, et al.’s (2014) response within the current academic literature. We also deconstruct the critiques and assertions advanced by Chetty, et al. (2014) by providing counter-arguments and supporting them with the scholarly research on this topic.

In doing so, we rely on the research literature actually produced on this subject over the past ten years. This more representative literature was completely overlooked by Chetty, et al. (2014), even though, paradoxically, they criticize the ASA for not citing the “recent” literature appropriately (p. 1). With this as our first point of contention, we also discuss four additional points of dispute within the commentary.

POINT 1: MISSING LITERATURES

In their critique of the ASA statement, posted on a university-sponsored website, Chetty, et al. (2014) marginalize the current literature published in scholarly journals on the issues surrounding VAMs and their uses for measuring teacher effectiveness. Rather, Chetty et al. cite only econometricians’ scholarly pieces, apparently in support of their a priori arguments and ideas. Hence, it is important to make explicit the rather odd and extremely selective literature Chetty, et al. included in the reference section of their critique, on which they relied “to prove” some of the ASA’s statements incorrect. The whole set of peer-reviewed articles that counter Chetty, et al.’s arguments and ideas is completely left out of their discussion.

A search on the Educational Resources Information Center (ERIC) with “value-added” as key words for the same last five years yields 406 entries, and a similar search in Journal Storage (JSTOR, a shared digital library) returns 495. Chetty, et al., however, cite only 13 references in their critique of the ASA’s statement, one of which is the statement itself, leaving 12 external citations in support of their critique. Of these 12 external citations, three are references to their own two forthcoming studies and a replication of those studies’ methods; three have thus far been published in peer-reviewed academic journals; six were written by their colleagues at Harvard University; and 11 were written by teams of scholars with economics professors/econometricians as lead authors.

POINT 2: CORRELATION VERSUS CAUSATION

The second point of contention surrounds whether the users of VAMs should be aware of the fact that VAMs typically measure correlation, not causation. According to the ASA, as pointed out by Chetty, et al. (2014), effects “positive or negative—attributed to a teacher may actually be caused by other factors that are not captured in the model” (p. 2). This is an important point with major policy implications. In seminal publications on the topic, Rubin, Stuart, and Zanutto (2004) and Wainer (2004), positioning their discussion within the Rubin Causal Model framework (Rubin, 1978; Rosenbaum & Rubin, 1983; Holland, 1986), clearly communicated, and evidenced, that value-added estimates cannot be considered causal unless a set of “heroic assumptions” is agreed to and imposed. Moreover, “anyone familiar with education will realize that this [is]…fairly unrealistic” (Rubin, et al., 2004, p. 108). Instead, Rubin, et al. suggested that, given these issues with confounded causation, we should switch gears and evaluate interventions and reward incentives based on the descriptive qualities of the indicators and estimates derived via VAMs. This point has since gained increased consensus among other scholars conducting research in these areas (Amrein-Beardsley, 2008; Baker, et al., 2010; Betebenner, 2009; Braun, 2008; Briggs & Domingue, 2011; Harris, 2011; Reardon & Raudenbush, 2009; Scherrer, 2011).

POINT 3: THE NON-RANDOM ASSIGNMENT OF STUDENTS INTO CLASSROOMS

The third point of contention pertains to Chetty, et al.’s statement that recent experimental and quasi-experimental studies have already solved the “causation versus correlation” issue. This claim is made despite the substantive research that evidences how the non-random assignment of students constrains VAM users’ capacities to make causal claims.

The authors of the Measures of Effective Teaching (MET) study cited by Chetty, et al. in their critique clearly state, “we cannot say whether the measures perform as well when comparing the average effectiveness of teachers in different schools…given the obvious difficulties in randomly assigning teachers or students to different schools” (Kane, McCaffrey, Miller & Staiger, 2013, p. 38). Moreover, VAM estimates have been found to be biased for teachers who taught relatively more homogeneous sets of students with lower levels of prior achievement, despite the sophistication of the statistical controls used (Hermann, Walsh, Isenberg, & Resch, 2013; see also Ehlert, Koedel, Parsons, & Podgursky, 2014; Guarino et al., 2012).

Researchers repeatedly demonstrated that non-random assignment confounds value-added estimates independent of how many sophisticated controls are added to the model (Corcoran, 2010; Goldhaber, Walch, & Gabele, 2012; Guarino, Maxfield, Reckase, Thompson, & Wooldridge, 2012; Newton, Darling-Hammond, Haertel, & Thomas, 2010; Paufler & Amrein-Beardsley, 2014; Rothstein, 2009, 2010).

Even in experimental settings, it is still not possible to distinguish between the effects of school practice, which is of interest to policy-makers, and the effects of school and home context. There are many factors at the student, classroom, school, home, and neighborhood levels, beyond researchers’ control, that would confound causal estimates. Thus, the four experimental studies cited by Chetty, et al. (2014) do not provide ample evidence to refute the ASA on this point.

POINT 4: ISSUES WITH LARGE-SCALE STANDARDIZED TEST SCORES

In their position statement, the ASA authors (2014) rightfully state that the standardized test scores used in VAMs should not be the only outcomes of interest for policy makers and stakeholders. Indeed, the current agreement is that test scores might not even be among the most important outcomes capturing a student’s educated self. Also, if value-added estimates from standardized test scores cannot be interpreted as causal, then the effect of “high value-added” teachers on college attendance, earnings, and reduced teenage birth rates cannot be considered causal either, contrary to what is implied by Chetty, et al. (2011; see also Chetty, et al., in press I, in press II).

Ironically, Chetty, et al. (2014) cite Jackson’s (2013) study to confirm their point that high value-added teachers also improve long-run outcomes of their students. Jackson (2013), however, actually found that teachers who are good at boosting test scores are not always the same teachers who have positive and long-lasting outcomes on non-cognitive skills acquisition. Moreover, value-added as related to test scores and non-cognitive outcomes for the same teachers were then, and have since been shown to be, weakly correlated with one another.

POINT 5: MODEL SPECIFICITY

Lastly, the ASA (2014) expressed concerns about the sensitivity of value-added estimates to model specifications. Recently, researchers have found that value-added estimates are highly sensitive to the tests being used, even within the same subject areas (Papay, 2011), and to the different subject areas taught by the same teachers given different student compositions (Loeb & Candelaria, 2012; Newton, et al., 2010; Rothstein, 2009, 2010). While Chetty, et al. rightfully noted that different VAMs typically yield correlations around r = 0.9, this is typical of most “garbage in, garbage out” models. These models are too often used, too often without question, to process questionable input and produce questionable output (Banchero & Kesmodel, 2011; Gabriel & Lester, 2012, 2013; Harris, 2011).

What Chetty, et al. overlooked, though, are the repeatedly demonstrated weak correlations between value-added estimates and other indicators of teacher quality, on average between r = 0.3 and r = 0.5 (Broatch & Lohr, 2012; Corcoran, 2010; Goldhaber et al., 2012; McCaffrey, Sass, Lockwood, & Mihaly, 2009; Mihaly, McCaffrey, Staiger, & Lockwood, 2013).

CONCLUSION

In sum, these are only a few “points” from this “point-by-point discussion” that would strike anyone even fairly familiar with the debate over the use and abuse of VAMs. These “points” are especially striking given the impact Chetty, et al.’s original (2011) study and now forthcoming studies (Chetty, et al., in press I, in press II) have already had on actual policy and the policy debates surrounding VAMs. Chetty, et al.’s (2014) discussion of the ASA statement, however, should give others pause as to whether Chetty, et al. are in fact experts in this field. What has certainly become evident is that they have not wrapped their minds around the extensive literature on this topic. If they had, they might not have come off as so selective, and biased, citing only works representing certain disciplines and certain studies to support the assumptions and “facts” upon which their criticisms of the ASA statement were based.

References

American Statistical Association. (2014). ASA Statement on using value-added models for educational assessment. Retrieved from http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf

Amrein-Beardsley, A. (2008). Methodological concerns about the Education Value-Added Assessment System (EVAAS). Educational Researcher, 37(2), 65–75. doi: 10.3102/0013189X08316420

Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., Ravitch, D., Rothstein, R., Shavelson, R. J., & Shepard, L. A. (2010). Problems with the use of student test scores to evaluate teachers. Washington, D.C.: Economic Policy Institute. Retrieved from http://www.epi.org/publications/entry/bp278

Banchero, S. & Kesmodel, D. (2011, September 13). Teachers are put to the test: More states tie tenure, bonuses to new formulas for measuring test scores. The Wall Street Journal. Retrieved from http://online.wsj.com/article/SB10001424053111903895904576544523666669018.html

Betebenner, D. W. (2009). Norm- and criterion-referenced student growth. Educational Measurement: Issues and Practice, 28(4), 42–51. doi:10.1111/j.1745-3992.2009.00161.x

Braun, H. I. (2008). Vicissitudes of the validators. Presentation made at the 2008 Reidy Interactive Lecture Series, Portsmouth, NH. Retrieved from http://www.cde.state.co.us/cdedocs/OPP/HenryBraunLectureReidy2008.ppt

Briggs, D., & Domingue, B. (2011, February). Due diligence and the evaluation of teachers: A review of the value-added analysis underlying the effectiveness rankings of Los Angeles Unified School District Teachers by the Los Angeles Times. Boulder, CO: National Education Policy Center. Retrieved from nepc.colorado.edu/publication/due-diligence

Broatch, J., & Lohr, S. (2012). Multidimensional assessment of value added by teachers to real-world outcomes. Journal of Educational and Behavioral Statistics, 37(2), 256–277.

Chetty, R., Friedman, J. N., & Rockoff, J. E. (2011). The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood. Cambridge, MA: National Bureau of Economic Research (NBER), Working Paper No. 17699. Retrieved from http://www.nber.org/papers/w17699

Chetty, R., Friedman, J. N., & Rockoff, J. (2014). Discussion of the American Statistical Association’s Statement (2014) on using value-added models for educational assessment. Retrieved from http://obs.rc.fas.harvard.edu/chetty/ASA_discussion.pdf

Chetty, R., Friedman, J. N., & Rockoff, J. E. (in press I). Measuring the impact of teachers I: Teacher value-added and student outcomes in adulthood. American Economic Review.

Chetty, R., Friedman, J. N., & Rockoff, J. E. (in press II). Measuring the impact of teachers II: Evaluating bias in teacher value-added estimates. American Economic Review.

Corcoran, S. (2010). Can teachers be evaluated by their students’ test scores? Should they be? The use of value added measures of teacher effectiveness in policy and practice. Educational Policy for Action Series. Retrieved from http://files.eric.ed.gov/fulltext/ED522163.pdf

Ehlert, M., Koedel, C., Parsons, E., & Podgursky, M. J. (2014). The sensitivity of value-added estimates to specification adjustments: Evidence from school- and teacher-level models in Missouri. Statistics and Public Policy, 1(1), 19–27.

Gabriel, R., & Lester, J. (2012). Constructions of value-added measurement and teacher effectiveness in the Los Angeles Times: A discourse analysis of the talk of surrounding measures of teacher effectiveness. Paper presented at the Annual Conference of the American Educational Research Association (AERA), Vancouver, Canada.

Gabriel, R., & Lester, J. N. (2013). Sentinels guarding the grail: Value-added measurement and the quest for education reform. Education Policy Analysis Archives, 21(9), 1–30. Retrieved from http://epaa.asu.edu/ojs/article/view/1165

Goldhaber, D., & Hansen, M. (2013). Is it just a bad class? Assessing the long-term stability of estimated teacher performance. Economica, 80, 589–612.

Goldhaber, D., Walch, J., & Gabele, B. (2012). Does the model matter? Exploring the relationships between different student achievement-based teacher assessments. Statistics and Public Policy, 1(1), 28–39.

Guarino, C. M., Maxfield, M., Reckase, M. D., Thompson, P., & Wooldridge, J.M. (2012, March 1). An evaluation of Empirical Bayes’ estimation of value-added teacher performance measures. East Lansing, MI: Education Policy Center at Michigan State University. Retrieved from http://www.aefpweb.org/sites/default/files/webform/empirical_bayes_20120301_AEFP.pdf

Harris, D. N. (2011). Value-added measures in education: What every educator needs to know. Cambridge, MA: Harvard Education Press.

Hermann, M., Walsh, E., Isenberg, E., & Resch, A. (2013). Shrinkage of value-added estimates and characteristics of students with hard-to-predict achievement levels. Princeton, NJ: Mathematica Policy Research. Retrieved from http://www.mathematica-mpr.com/publications/PDFs/education/value-added_shrinkage_wp.pdf

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.

Jackson, K. C. (2012). Non-cognitive ability, test scores, and teacher quality: Evidence from 9th grade teachers in North Carolina. Cambridge, MA: National Bureau of Economic Research (NBER), Working Paper No. 18624. Retrieved from http://www.nber.org/papers/w18624

Kane, T., McCaffrey, D., Miller, T. & Staiger, D. (2013). Have we identified effective teachers? Validating measures of effective teaching using random assignment. Bill and Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/MET_Validating_Using_Random_Assignment_Research_Paper.pdf

Loeb, S., & Candelaria, C. (2013). How stable are value-added estimates across years, subjects and student groups? Carnegie Knowledge Network. Retrieved from http://carnegieknowledgenetwork.org/briefs/value‐added/value‐added‐stability

McCaffrey, D. F., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). The intertemporal variability of teacher effect estimates. Education Finance and Policy, 4, 572–606.

Mihaly, K., McCaffrey, D., Staiger, D. O., & Lockwood, J. R. (2013). A composite estimator of effective teaching. Seattle, WA: Bill and Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/MET_Composite_Estimator_of_Effective_Teaching_Research_Paper.pdf

Newton, X. A., Darling-Hammond, L., Haertel, E., & Thomas, E. (2010). Value added modeling of teacher effectiveness: An exploration of stability across models and contexts. Education Policy Analysis Archives, 18(23). Retrieved from epaa.asu.edu/ojs/article/view/810

Papay, J. P. (2011). Different tests, different answers: The stability of teacher value-added estimates across outcome measures. American Educational Research Journal, 48(1), 163–193.

Paufler, N. A., & Amrein-Beardsley, A. (2014). The random assignment of students into elementary classrooms: Implications for value-added analyses and interpretations. American Educational Research Journal.

Reardon, S. F., & Raudenbush, S. W. (2009). Assumptions of value-added models for estimating school effects. Education Finance and Policy, 4(4), 492–519. doi:10.1162/edfp.2009.4.4.492

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.

Rothstein, J. (2009). Student sorting and bias in value-added estimation: Selection on observables and unobservables. Education Finance and Policy, (4)4, 537–571. doi:http://dx.doi.org/10.1162/edfp.2009.4.4.537

Rothstein, J. (2010). Teacher quality in educational production: Tracking, decay, and student achievement. Quarterly Journal of Economics, 125(1), 175–214. doi:10.1162/qjec.2010.125.1.175

Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. The Annals of Statistics, 6, 34–58.

Rubin, D. B., Stuart, E. A., & Zanutto, E. L. (2004). A potential outcomes view of value-added assessment in education. Journal of Educational and Behavioral Statistics, 29(1), 103–116.

Scherrer, J. (2011). Measuring teaching using value-added modeling: The imperfect panacea. NASSP Bulletin, 95(2), 122–140. doi:10.1177/0192636511410052

Wainer, H. (2004). Introduction to a special issue of the Journal of Educational and Behavioral Statistics on value-added assessment. Journal of Educational and Behavioral Statistics, 29(1), 1–3. doi:10.3102/10769986029001001

Cite This Article as: Teachers College Record, Date Published: August 01, 2014
http://www.tcrecord.org ID Number: 17633, Date Accessed: 8/10/2014 8:23:06 AM

I don’t put a lot of credibility in state rankings, except to the extent that it shows state officials where they need to make improvements. I have a hard time imagining any family saying, “Hey, I just saw this ranking of states. Let’s move from Mississippi to New Jersey.”

And then there is the problem of conflicting rankings. The states that Michelle Rhee ranked among the best came in poorly in the Wallethub survey. Move to Louisiana if you believe Rhee, but move to New Jersey if you believe Wallethub.

Wallethub is a financial services company that ranks stuff. In this survey, it took 12 factors into account, such as dropout rates, test scores, pupil/teacher ratios, bullying incidents, and the percentage of the population over 25 with a bachelor’s degree or higher. The survey counts the availability of online public schools as a plus, but this is an instance where greater discrimination is needed to draw a line between genuine online public schools and get-rich-quick online scams.

The reason that surveys like this fall short is that there are good and not-so-good schools in every state. The reason that a survey like this is valuable is that Vermont’s State Commissioner, Rebecca Holcombe, could point to Vermont’s ranking as the third best state in the nation in the Wallethub survey as a way to resist the pressure from Washington to declare every school in Vermont a low-performing school.

I bet there are many excellent schools in states that fall at the bottom of anyone’s rating.

Valerie Strauss shows in this post that there were NO gains in reading in the District of Columbia Public Schools during the tenure of Michelle Rhee and her successor Kaya Henderson. G.F. Brandenburg noted these facts on his blog on July 31. Brandenburg asks: “So where are all those increases that Michelle Rhee promised in writing?”

Strauss writes that this is more than just a personal failure. This is a failure of the entire reform strategy.

Bernie Horn of the Public Leadership Institute writes:

“If this isn’t failure, what is?

“The latest results of the DC-CAS, the District of Columbia’s high-stakes standardized test, show that the percentage of public school students judged “proficient” or better in reading has declined over the past five years in every significant subcategory except “white.”

“This is important, and not just for Washington, D.C. It is an indictment of the whole corporatized education movement. During these five years, first Michelle Rhee and then her assistant/successor Kaya Henderson controlled DCPS and they did everything that the so-called “reformers” recommend: relying on standardized tests to rate schools, principals and teachers; closing dozens of schools; firing hundreds of teachers and principals; encouraging the unchecked growth of charters; replacing fully-qualified teachers with Teach For America and other non-professionals; adopting teach-to-the-test curricula; introducing computer-assisted “blended learning”; increasing the length of the school day; requiring an hour of tutoring before after-school activities; increasing hours spent on tested subjects and decreasing the availability of subjects that aren’t tested. Based on the city’s own system of evaluation, none of it has worked.”

There were no gains, no miracles. Except for a very small improvement in the proficiency rates of white students, every category declined: low-income students, black students, Hispanic students, and special education students all declined. Whites saw a small uptick of 1.6% from 2009 to 2014.

Horn writes:

“In truly Orwellian fashion, DCPS presents these disastrous numbers under the heading “Long-term progress in Reading has been maintained.” The Mayor, the DCPS Chancellor, and the powers-that-be all act like there’s nothing wrong.

“But clearly, this is what failure looks like. If a school had scores like this over the past five years, it would be targeted for closure. If principals or teachers had scores like this, they would be fired. If a student had scores like this, s/he would be made to feel like a failure. Where is the accountability in this supposedly “data-driven” system?”

Yet remember that TIME magazine ran a cover story on December 8, 2008, about Michelle Rhee (written by Amanda Ripley), proclaiming that this was the woman who knew how to “fix” America’s schools.

Does Michelle Rhee know how to “fix” America’s schools? There is no evidence that she does. She didn’t do it in D.C. She is still collecting millions of dollars from unnamed donors to persuade legislators to follow her disastrous strategies.

We should know by now that the data-driven, test-driven approach doesn’t work. We should know by now that schools need experienced teachers and leaders to help children and new teachers. We should know by now that schools need stability and constancy of purpose, not disruption and high teacher turnover. We should immediately end the war on public schools and teachers and give our schools the resources they need and give our professionals the respect they deserve.
