Archives for category: NAEP

The Brookings Institution was once known as a reliable source of thoughtful, informed analysis of important policy issues. In the past decade, it has turned its education commentary over to right-wing ideologues who are indifferent to facts they ought to know.

 

On behalf of Brookings, Jonathan Rothwell, an economist for Gallup, complains that the U.S. spends more on education, has seen no improvement in decades, and is seeing no gains in productivity. He ends by saying that low-income families can’t afford private tutors or home schooling, as though these were viable ways to improve education for the poorest children.

 

I can’t unpack all this in a short space, but I would like to show you in a few paragraphs why this is an uninformed article. To begin with, Rothwell cherrypicks the data on test scores. This makes his analysis misleading and wrong. Test scores are the highest they have ever been on the only longitudinal measure we have: the National Assessment of Educational Progress (NAEP). He selectively quotes one version of the NAEP, while ignoring the other.

 

There are actually two versions of NAEP. One is called the “Long Term Trend” (LTT) assessment; the other is the main NAEP. The LTT is offered every four years to samples of students at ages 9, 13, and 17. The main NAEP is given every other year to students in grades 4 and 8.

 

LTT contains questions that are unchanged since the early 1970s and have no relation to what students are taught today. Occasionally, questions are deleted because their content is obsolete (e.g., a question that refers to S&H Green Stamps). The data for 17-year-olds is especially dubious because this group has no incentive to take the NAEP tests seriously.

 

The National Assessment Governing Board is aware of the problem of low motivation among 17-year-olds. When I served on the board, from 1994 to 2001, we devoted a large part of one of our quarterly meetings to this problem. There was talk of incentives, such as pizza parties or cash, but the problem was never resolved. The bottom line, however, is that any analysis of the test scores of 17-year-olds must acknowledge that this group doesn’t care about the test because they know it doesn’t matter. What the board learned when we discussed it is that some 17-year-olds doodle on the answer sheet or answer every question by checking the same letter. They don’t care.

 

I recommend that Rothwell read Chapter 5 of my book Reign of Error. He would learn there that scores on the main NAEP reached their highest point ever in 2013 (they were flat for the first time in many years in 2015). This was true for whites, blacks, Hispanics, and Asians alike. He would also learn that the graduation rate was the highest ever for these groups, and the dropout rate the lowest ever. He would see a different reading of the LTT data, showing a dramatic rise in math test scores for black and Hispanic students in all three age groups, and for white students at ages 9 and 13, from 1973 to 2008. Even white 17-year-olds saw a gain, though a small one.

 

If I may quote my analysis, based on a review of both versions of NAEP, “NAEP data show beyond question that test scores in reading and math have improved for almost every group of students over the past two decades: slowly and steadily in the case of reading, dramatically in the case of mathematics.”

 

I would also urge Rothwell to read Chapter 7, which reviews the international test scores. It shows that we were never #1 on international tests. In fact, when the first international tests were given in 1964, we were last among 12 nations. Yet over the half century that followed, we outpaced the other 11 nations by every measure.

 

I know that Brookings uses Google or some other search engine to find anything that quotes its articles and research. I hope that they find this article and bring it to the attention of Jonathan Rothwell.

 

More important, I can only hope that Brookings would make the effort to employ genuine education researchers to write about this important subject. Over the past decade, its education spokesman was Grover Whitehurst, George W. Bush’s former research director, who turned Brookings into a cheerleading think tank for school choice. This is unworthy of a once-great and once-trusted institution.

 

Caitlin Emma, Benjamin Wermund, and Kimberly Hefling, staff writers at politico.com, took a close look at Michigan and answered the question, what hath Betsy DeVos’s obsession with choice done to the schools of Michigan?

 

Unless you are a choice fanatic like DeVos, the answer is not encouraging.

 

Despite two decades of charter-school growth, the state’s overall academic progress has failed to keep pace with other states: Michigan ranks near the bottom for fourth- and eighth-grade math and fourth-grade reading on a nationally representative test, nicknamed the “Nation’s Report Card.” Notably, the state’s charter schools scored worse on that test than their traditional public-school counterparts, according to an analysis of federal data.

 

Critics say Michigan’s laissez-faire attitude about charter-school regulation has led to marginal and, in some cases, terrible schools in the state’s poorest communities as part of a system dominated by for-profit operators. Charter-school growth has also weakened the finances and enrollment of traditional public-school districts like Detroit’s, at a time when many communities are still recovering from the economic downturn that hit Michigan’s auto industry particularly hard.

 

The results in Michigan are so disappointing that even some supporters of school choice are critical of the state’s policies.

 

“The bottom line should be, ‘Are kids achieving better or worse because of this expansion of choice?’” said Michigan State Board of Education President John Austin, a DeVos critic who also describes himself as a strong charter-school supporter. “It’s destroying learning outcomes … and the DeVoses were a principal agent of that.”

 

The links are in the article, as well as a puzzle. Check out the link to CREDO at Stanford (funded by the Walton Foundation), which issued a report on Michigan charters and praised them extensively. How does the CREDO finding square with Michigan’s low standing on the National Assessment of Educational Progress? How does it square with the fact that Detroit is the worst-performing urban district tested by NAEP?

 

The federal test, the National Assessment of Educational Progress–called “The Nation’s Report Card”–will begin testing “grit.” Grrr.

Politico.com reports:

QUESTIONING GRIT: How do issues such as “grit” – or perseverance – and school climate correlate to student test performance? Following next year’s National Assessment of Educational Progress, researchers hope to have more clues. The student questionnaire on NAEP in 2017 will include a series of questions that address what are known as “non-cognitive” skills. By asking several questions within the topic areas, acting NCES Commissioner Peggy Carr tells Morning Education they will get “multiple chances to get the best measurement possible.” Researchers will take the answers to create an index that’s compared with test scores to create a composite designed to reflect the relationship between factors such as grit and academic achievement. Students will also be asked about technology use and their socioeconomic status.

– A glance at questions piloted earlier this year suggests what researchers are looking for [http://1.usa.gov/296dnLY ], although how the questionnaire will look next year could change. One pilot question, for example, asked fourth graders if they have a “problem while working toward an important goal, how well can you keep working?” Another question asks how often they “felt left out of things at school?” The questionnaire typically takes students about 15 minutes to fill out, and is optional.

– These types of questions have long helped researchers and policymakers to better understand student learning, said Bill Bushaw, the executive director of the National Assessment Governing Board. He said no personally-identifying information is collected from the sample of students taking NAEP, and no questions are asked about personal beliefs or religion. “It really helps understand the differences in academic achievement and that’s really what this is about,” Bushaw said. For example, in the past, Bushaw said a question that asked students how many times they’d been absent in the last month led researchers to conclude that students who miss a lot of school have lower achievement. School districts have responded by focusing more on school attendance.

Just think. We will soon know which state has the grittiest students. Which race and ethnicity and gender needs more grit. Where the grit gap is. The possibilities for research are amazing!

Tom Loveless of the Brookings Institution has studied student achievement for many years. He has written several reports on the National Assessment of Educational Progress (NAEP). Before earning his doctorate, he taught sixth grade in California.

In this post, he explains why “reformers” who confuse NAEP’s “proficient level” with “grade level” are wrong.

This claim has been asserted by pundits like Campbell Brown of The 74, Michelle Rhee, and organizations such as Achieve. They want the public to believe that our public schools are failing miserably, and our kids are woefully dumb. But Loveless shows why they are wrong.

He writes:

Equating NAEP proficiency with grade level is bogus. Indeed, the validity of the achievement levels themselves is questionable. They immediately came under fire in reviews by the U.S. Government Accountability Office, the National Academy of Sciences, and the National Academy of Education.[1] The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.”

Despite warnings of NAEP authorities and critical reviews from scholars, some commentators, typically from advocacy groups, continue to confound NAEP proficient with grade level. Organizations that support school reform, such as Achieve Inc. and Students First, prominently misuse the term on their websites. Achieve presses states to adopt cut points aligned with NAEP proficient as part of new Common Core-based accountability systems. Achieve argues that this will inform parents whether children “can do grade level work.” No, it will not. That claim is misleading.

The expectation that all students might one day reach 100% proficiency on the NAEP is completely unrealistic. It has not happened in any other country, including the highest performing. Not even our very top students taught by our very best teachers have reached 100% proficiency. This is a myth that should be discarded.

Loveless goes even farther and insists that NAEP achievement levels should not be the benchmark for student progress.

He warns:

Confounding NAEP proficient with grade-level is uninformed. Designating NAEP proficient as the achievement benchmark for accountability systems is certainly not cautious use. If high school students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades.

Anyone who claims that NAEP proficient is the same as grade level should not be taken seriously. Loveless doesn’t point out that the designers of the Common Core tests decided to align their “passing mark” with NAEP proficient, which explains why 70% of students typically fail the PARCC and SBAC tests. Bear in mind that the passing mark (the cut score) can be set anywhere–so that all students “pass,” no students pass, or some set percentage passes. That’s because the questions have been pre-tested, and test developers know their level of difficulty. And that is why U.S. Secretary of Education John King, when he was New York Commissioner of Education, predicted that only 30% of the students who took the state tests would “pass.” He was uncannily accurate because he already knew that the test was designed to “fail” 70%.
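The arithmetic behind this is simple enough to sketch in code. The snippet below is purely illustrative (the score distribution and the function name are invented for the example, not any actual NAEP or PARCC procedure): once a test maker knows the distribution of field-tested scores, choosing a cut point determines the pass rate before any student sits the exam.

```python
# Illustrative sketch: with a known score distribution, a cut score
# can be chosen to produce any desired failure rate. Scores are invented.

def cut_score_for_fail_rate(scores, fail_rate):
    """Return the lowest 'passing' score when the bottom
    fail_rate fraction of test takers is meant to fail."""
    ranked = sorted(scores)
    n_fail = int(len(ranked) * fail_rate)  # how many fall below the cut
    return ranked[n_fail]                  # first score that passes

scores = list(range(1, 101))   # 100 hypothetical test takers, scores 1..100
cut = cut_score_for_fail_rate(scores, 0.70)
passed = sum(s >= cut for s in scores)
print(cut, passed)             # with these scores: cut is 71, 30 of 100 pass
```

The point of the sketch is that the pass rate is a policy choice, not a finding: move the cut point and the same students “pass” or “fail” in whatever proportion the designer selects.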

He concludes:

NAEP proficient is not synonymous with grade level. NAEP officials urge that proficient not be interpreted as reflecting grade level work. It is a standard set much higher than that. Scholarly panels have reviewed the NAEP achievement standards and found them flawed. The highest scoring nations of the world would appear to be mediocre or poor performers if judged by the NAEP proficient standard. Even large numbers of U.S. calculus students fall short.

As states consider building benchmarks for student performance into accountability systems, they should not use NAEP proficient—or any standard aligned with NAEP proficient—as a benchmark. It is an unreasonable expectation, one that ill serves America’s students, parents, and teachers–and the effort to improve America’s schools.

Mercedes Schneider followed the contretemps around Campbell Brown’s misuse of NAEP data. She decided she would try to educate Campbell.

 

Mercedes goes through NAEP scores over many years and shows how they went up nicely until 2015, when they stagnated at about 1/3 proficient. She patiently explains that until 2015, scores were going up, not declining as Campbell believes.

 

Now for Campbell to understand this, she is going to have to read it and think about it.

 

Will she?

 

Will she correct her error about 2/3 of US kids in 8th grade being “below grade level”? It is not true.

Carol Burris wrote a post for Valerie Strauss’s “Answer Sheet” blog at the Washington Post, in which she reported the numerous Twitter exchanges among herself, Tom Loveless, and Campbell Brown. Burris and Loveless fruitlessly tried to persuade Brown to retract her inaccurate statement that 2/3 of America’s eighth graders score below grade level.

 

Given an opportunity to respond by Valerie Strauss, Brown made an incomprehensible statement about how she should have referred to grade-level proficiency instead of grade level. Then everything would be okay. Instead of correcting her error, Brown insisted she was under personal attack.

 

Please read the last two sentences of her comments, which are hilarious. Especially the reference to “the age of Donald Trump and Diane Ravitch” (sic)! And then there is her laughable claim that those who disagree with her negative comments are profiting from school failure. I wish that she–who received $4 million to start her website–would provide evidence for that statement!

 
The rheeform leadership has changed. Michelle Rhee was once the cover girl for test-and-punish reform; now it is Campbell Brown. The telegenic Brown used to read the news on television, but she has since taken Rhee’s place in the reformy firmament. Since launching her career as an education expert with an op-ed attacking the New York City teachers’ union for protecting sexual predators, Brown has become increasingly active in the world of education punditry. She received $4 million from various billionaires to launch a news site called “The 74,” whose name was supposed to refer to the number of school-age children in the United States, in millions. In fact, there are about 50 million school-age children, but then why quibble? Brown organized candidate debates for both parties last fall. Three Republicans showed up, and no Democrats. Yesterday, she moderated a panel at a Harvard Graduate School of Education symposium on poverty and schooling.

 

Now Brown, having established her bona fides as an expert on education, has prepared a memo for the next president. 

 

Unfortunately, her memo begins with a false statement. She starts by saying that 2/3 of American students in eighth grade are “below grade level” in reading and math. Apparently she refers to the National Assessment of Educational Progress, the only national assessment of student skills. She confuses NAEP proficiency, a specific achievement level, with grade level.

 

To begin with, “grade level” is a median. Fifty percent are always above grade level, and fifty percent are always below.

 

But the NAEP achievement levels do not measure “grade level.” They are defined in the NAEP reports thus: “basic” represents partial mastery of skills; “proficient” represents mastery; “advanced” represents superior performance. “Below basic” is very poor performance.

 

Here are the definitions on the NAEP website:

 

Achievement Level Policy Definitions

Basic

Partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade.

Proficient

Solid academic performance for each grade assessed. Students reaching this level have demonstrated competency over challenging subject matter, including subject-matter knowledge, application of such knowledge to real-world situations, and analytical skills appropriate to the subject matter.

Advanced

Superior performance.

 

Here is a statement on the U.S. Department of Education (National Center for Education Statistics) website:

 

https://nces.ed.gov/nationsreportcard/studies/statemapping/faq.aspx

It states: “Proficient is not synonymous with grade-level performance.”

 

The NAEP website says that the Governing Board believes the goal should be “proficient,” not “basic,” but the reality is that these achievement levels have been in place since 1992, and in no state or district has 100% of students ever achieved NAEP proficiency. In only one state, Massachusetts, have as many as 50% of students reached proficiency. If you believe, as Campbell Brown and the NAGB do, that 100% of students should reach proficiency, then you believe that somewhere there is a baseball team that never loses a game, or an entire school district in which all children get grades of all As. It has never happened, not even in the wealthiest, most successful schools and districts. When elephants can fly, that is when “all” students will reach NAEP proficiency. Be it noted that the standard (the passing mark, or cut score) for the Common Core tests is aligned with NAEP proficient, which is why 65-70% of students consistently “fail.”

 

I was a member of the National Assessment Governing Board for seven years. I read questions before they were administered to samples of students across the nation and in every state. The greatest number of students score as “basic,” which I consider the equivalent of a B or C. Those who register as “proficient” are the equivalent of an A performance. “Advanced” is for superstars. Typically, only 5-10% of students are “advanced.” About a third are proficient or advanced. The remaining 65% or so are basic or below basic. (These are my interpretations, not the government’s or NAGB’s.)

 

To expect that most students will score the equivalent of an A is nonsensical.

 

Ms. Brown has been engaged in a Twitter debate with Tom Loveless of the Brookings Institution. Loveless, a real expert with years of elementary school teaching experience and a doctorate, has been studying and writing about NAEP and student performance for many years. He chastised Brown on Twitter for saying that 2/3 of students are “below grade level.” He encouraged her to check her facts, assuming that her journalistic background had taught her to do so. Carol Burris, director of the Network for Public Education and a veteran educator, jumped into the exchange.

 

Just yesterday, Brown responded with this comment:

 

Campbell Brown ‏@campbell_brown 
@carolburris @tomloveless99 this is why parents dont listen to u. U play semantics while 2/3 kids arent where they should be. I call BS

 

Since Brown thinks that NAEP proficiency is the same as “grade level,” she would profit by reading this report on the meaning of NAEP achievement levels. It gives a good overview of them and points out that they do not refer to grade levels. The report also usefully reviews the numerous critiques of the achievement levels, by experts who consider them “fundamentally flawed” and an inaccurate measure of student achievement.

 

I can only hope that Ms. Brown, education expert, gets a quick tutorial about what NAEP achievement levels are.

 

And I invite her to take the NAEP eighth-grade test, composed of released questions in reading and math, and release her scores. In a supervised setting, of course. I think she will be surprised. I will be interested to see if she is “proficient,” since she believes that anyone who is not proficient is a failure.

 

The recent release of the scores of high school seniors on the National Assessment of Educational Progress showed that low-performing students suffered the biggest declines.

“Much like their 4th and 8th grade peers, high school seniors have lost ground in math over the last two years, according to the most recent scores on a national achievement test.

“In reading, 12th grade scores remained flat, continuing a trend since 2009.

“Perhaps the most striking detail in the test data, though, is that the lowest achievers showed large score drops in both math and reading. Between 2013 and 2015, students at or below the 10th percentile in reading went down an average of 6 points on the National Assessment for Educational Progress—the largest drop in a two-year period since 1994. The high achievers, on the other hand—those at or above the 90th percentile—did significantly better in reading, gaining two points, on average, while staying stagnant in math.”

John Thompson knows that reformers point to the District of Columbia as one of their examples of success. After all, the district has been controlled by Teach for America alumnae Michelle Rhee and Kaya Henderson since 2007. They own whatever successes and failures occurred over the past eight years. The centerpiece of their claims of success is NAEP scores, which are up.

 

In this post, Thompson identifies the flaws in the narrative of success. He lauds John Merrow for critiquing the narrative of a district Merrow once held up as an exemplar of successful reform. Merrow asked, in his post, why anyone was celebrating Kaya Henderson’s five-year anniversary in the wake of the disastrous scores on the Common Core PARCC tests, which showed a district where academic performance was dismal.

 

Thompson reviews the NAEP scores, using Rick Hess’s data.

 

Hess cites overall gains in NAEP growth under Rhee and Henderson, but those same NAEP studies actually support the common sense conclusion that the numbers reflect gentrification. Hess’s charts show that from 2005 to 2013, the percentage of D.C. students who are low-income dropped from 66% to 61.6%. (In my world, a 61.6% low-income urban school seems danged-near rich.) Per student spending increased by 40% during that time. (The new spending, alone, comes close to the total per student spending in my 90% low-income system.)

 

According to Hess’s chart, the percentage of the D.C. students who are black dropped by 1/8th from 2005 to 2013, and the percentage of students with disabilities dropped by 1/7th. And, the 2015 NAEP excluded as many as 44% of D.C.’s English Language Learners. The conservative reformer RiShawn Biddle calls that exclusion “massive and unacceptable test-cheating.”

 

Even so, as Merrow reminds us, the performance gap between low-income and more affluent students has grown even wider; for instance, from 2002 to 2015, the 8th grade reading performance gap grew from 17 to 48 points.

 

Before Rhee/Henderson, the growth in D.C. test scores was spread much more widely. Because I believe that 8th grade reading is the most important NAEP metric in terms of evaluating school performance, I will cite some of those metrics in support of Merrow. From 1998 to 2002, black 8th grade reading scores increased from an average of 233 to 238. By 2015, they were down to 236. From 1998 to 2002, average 8th grade reading scores for low-income students increased from 229 to 233. In 2015, they remained at 233.

 

Thompson says it is sad that the elites now re-engineering public education are utterly disconnected from the lives and realities of the children who attend those schools and the people who teach in them. They need a reality check, or maybe a course in sociocultural sensitivity, so that they stop stepping on the faces of children and adults whose lives they know nothing about.

 

News flash! There is a national test that enables us to compare reading and math scores for every state! It is called NAEP. It reports scores by race, English-language-learner status, poverty, gender, and disability status, as well as achievement gaps. This is apparently unknown to the New York Times and the Secretary of Education, who has said repeatedly that we need Common Core tests to compare states.

The New York Times, America’s newspaper of record, has a story today about Massachusetts’ decision to abandon PARCC, even though its State Commissioner Mitchell Chester is chairman of the board of PARCC. True or Memorex? Time will tell.

But the story has a serious problem: the opening sentence.

“It has been one of the most stubborn problems in education: With 50 states, 50 standards and 50 tests, how could anyone really know what American students were learning, or how well?”

Later the story has this sentence:

“The state’s rejection of that test sounded the bell on common assessments, signaling that the future will now look much like the past — with more tests, but almost no ability to compare the difference between one state and another.”

What happened to the National Assessment of Educational Progress? It has been comparing all the states and D.C., as well as many cities, since 1992. Has no one at the New York Times ever heard of NAEP?