Archives for category: NAEP

The federal test, the National Assessment of Educational Progress, called “The Nation’s Report Card,” will begin testing “grit.” Grrr.

Politico.com reports:

QUESTIONING GRIT: How do issues such as “grit” – or perseverance – and school climate correlate to student test performance? Following next year’s National Assessment of Educational Progress, researchers hope to have more clues. The student questionnaire on NAEP in 2017 will include a series of questions that address what are known as “non-cognitive” skills. By asking several questions within the topic areas, acting NCES Commissioner Peggy Carr tells Morning Education they will get “multiple chances to get the best measurement possible.” Researchers will take the answers to create an index that’s compared with test scores to create a composite designed to reflect the relationship between factors such as grit and academic achievement. Students will also be asked about technology use and their socioeconomic status.

– A glance at questions piloted earlier this year suggests what researchers are looking for [http://1.usa.gov/296dnLY ], although how the questionnaire will look next year could change. One pilot question, for example, asked fourth graders if they have a “problem while working toward an important goal, how well can you keep working?” Another question asks how often they “felt left out of things at school?” The questionnaire typically takes students about 15 minutes to fill out, and is optional.

– These types of questions have long helped researchers and policymakers to better understand student learning, said Bill Bushaw, the executive director of the National Assessment Governing Board. He said no personally-identifying information is collected from the sample of students taking NAEP, and no questions are asked about personal beliefs or religion. “It really helps understand the differences in academic achievement and that’s really what this is about,” Bushaw said. For example, in the past, Bushaw said a question that asked students how many times they’d been absent in the last month led researchers to conclude that students who miss a lot of school have lower achievement. School districts have responded by focusing more on school attendance.

Just think. We will soon know which state has the grittiest students. Which race and ethnicity and gender needs more grit. Where the grit gap is. The possibilities for research are amazing!

Tom Loveless of the Brookings Institution has studied student achievement for many years. He has written several reports on the National Assessment of Educational Progress (NAEP). Before earning his doctorate, he taught sixth grade in California.

In this post, he explains why “reformers” who confuse NAEP’s “proficient level” with “grade level” are wrong.

This claim has been asserted by pundits like Campbell Brown of The 74, Michelle Rhee, and organizations such as Achieve. They want the public to believe that our public schools are failing miserably, and our kids are woefully dumb. But Loveless shows why they are wrong.

He writes:

Equating NAEP proficiency with grade level is bogus. Indeed, the validity of the achievement levels themselves is questionable. They immediately came under fire in reviews by the U.S. Government Accountability Office, the National Academy of Sciences, and the National Academy of Education.[1] The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.”

Despite warnings of NAEP authorities and critical reviews from scholars, some commentators, typically from advocacy groups, continue to confound NAEP proficient with grade level. Organizations that support school reform, such as Achieve Inc. and Students First, prominently misuse the term on their websites. Achieve presses states to adopt cut points aligned with NAEP proficient as part of new Common Core-based accountability systems. Achieve argues that this will inform parents whether children “can do grade level work.” No, it will not. That claim is misleading.

The expectation that all students might one day reach 100% proficiency on the NAEP is completely unrealistic. It has not happened in any other country, including the highest performing. Not even our very top students, taught by our very best teachers, have reached 100% proficiency. This is a myth that should be discarded.

Loveless goes even further and insists that NAEP achievement levels should not be the benchmark for student progress.

He warns:

Confounding NAEP proficient with grade-level is uninformed. Designating NAEP proficient as the achievement benchmark for accountability systems is certainly not cautious use. If high school students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades.

Anyone who claims that NAEP proficient is the same as grade level should not be taken seriously. Loveless doesn’t point out that the designers of the Common Core tests decided to align their “passing mark” with NAEP proficient, which explains why about 70% of students typically fail the PARCC and SBAC tests. Bear in mind that the passing mark (the cut score) can be set anywhere: so that all students “pass,” no students pass, or some predetermined percentage passes. That’s because the questions have been pre-tested, and test developers know their level of difficulty. And that is why U.S. Secretary of Education John King, when he was New York Commissioner of Education, predicted that only 30% of the students who took the state tests would “pass.” He was uncannily accurate because he already knew that the test was designed to “fail” 70%.
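The mechanics are simple enough to sketch in a few lines. What follows is a hypothetical illustration, not NAEP’s or PARCC’s actual standard-setting procedure: because the items have been field-tested, the distribution of scores is known in advance, and the cut score can be placed at whatever quantile produces the desired pass rate.

```python
import random

# Simulated scale scores on a field-tested exam. Because item
# difficulties are known in advance, the distribution of scores
# is predictable before the test is ever given for real.
random.seed(0)
scores = [random.gauss(250, 35) for _ in range(10_000)]

def cut_score_for_pass_rate(scores, pass_rate):
    """Place the cut score at the quantile that 'passes' the desired fraction."""
    ordered = sorted(scores)
    index = int(len(ordered) * (1 - pass_rate))
    return ordered[index]

# Decide in advance that only 30% will pass -- i.e., 70% "fail" by design.
cut = cut_score_for_pass_rate(scores, 0.30)
passed = sum(s >= cut for s in scores) / len(scores)
print(f"cut score: {cut:.0f}  pass rate: {passed:.0%}")
```

Set `pass_rate` to 1.0 and everyone passes; set it to 0.0 and no one does. The students’ knowledge never changes; only the line drawn through the distribution does.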

He concludes:

NAEP proficient is not synonymous with grade level. NAEP officials urge that proficient not be interpreted as reflecting grade level work. It is a standard set much higher than that. Scholarly panels have reviewed the NAEP achievement standards and found them flawed. The highest scoring nations of the world would appear to be mediocre or poor performers if judged by the NAEP proficient standard. Even large numbers of U.S. calculus students fall short.

As states consider building benchmarks for student performance into accountability systems, they should not use NAEP proficient—or any standard aligned with NAEP proficient—as a benchmark. It is an unreasonable expectation, one that ill serves America’s students, parents, and teachers–and the effort to improve America’s schools.

Mercedes Schneider followed the contretemps around Campbell Brown’s misuse of NAEP data. She decided she would try to educate Campbell.

Mercedes goes through NAEP scores over many years and shows how they went up nicely until 2015, when they stagnated at about 1/3 proficient. She patiently explains that until 2015, scores were going up, not declining as Campbell believes.


Now for Campbell to understand this, she is going to have to read it and think about it.


Will she?


Will she correct her error about 2/3 of US kids in 8th grade being “below grade level”? It is not true.

Carol Burris wrote a post for Valerie Strauss’s “Answer Sheet” blog at the Washington Post, in which she reported the numerous Twitter exchanges among herself, Tom Loveless, and Campbell Brown. Burris and Loveless fruitlessly tried to persuade Brown to retract her inaccurate statement that 2/3 of America’s eighth graders score below grade level.


Given an opportunity to respond by Valerie Strauss, Brown made an incomprehensible statement about how she should have referred to grade-level proficiency instead of grade level. Then everything would be okay. Instead of correcting her error, Brown insisted she was under personal attack.


Please read the last two sentences of her comments, which are hilarious, especially the reference to “the age of Donald Trump and Diane Ravitch” (sic)! And then there is her laughable claim that those who disagree with her negative comments are profiting from school failure. I wish she, who received $4 million to start her website, would provide evidence for that statement!


The rheeform leadership has changed. Michelle Rhee was once the cover girl for test-and-punish reform, and now it is Campbell Brown. The telegenic Brown used to read the news on television, but now she has taken Rhee’s place in the reformy firmament. Since she launched her career as an education expert with an op-ed attacking the teachers’ union in New York City for protecting sexual predators, Brown has become increasingly active in the world of education punditry. She received $4 million from various billionaires to launch a news site called “The 74,” which was supposed to refer to the number of school-age children in the United States. In fact, there are about 50 million school-age children, but then, why quibble? Brown organized candidate debates for both parties last fall. Three Republicans showed up, and no Democrats. Yesterday, she moderated a panel at a Harvard Graduate School of Education symposium on poverty and schooling.


Now Brown, having established her bona fides as an expert on education, has prepared a memo for the next president. 


Unfortunately, her memo begins with a false statement. She starts by saying that 2/3 of American students in eighth grade are “below grade level” in reading and math. Apparently she refers to the National Assessment of Educational Progress, the only national assessment of student skills. She confuses NAEP proficient, a specific achievement level, with grade level.

To begin with, “grade level” is a median. Fifty percent are always above grade level, and fifty percent are always below.
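The claim is purely definitional, and a few lines of code with made-up scores (the distribution doesn’t matter) confirm it:

```python
import random
import statistics

# Hypothetical scores -- any distribution gives the same result,
# because the median is, by definition, the 50/50 dividing line.
random.seed(1)
scores = [random.gauss(250, 35) for _ in range(10_001)]

median = statistics.median(scores)
below = sum(s < median for s in scores) / len(scores)
above = sum(s > median for s in scores) / len(scores)
print(f"below the median: {below:.1%}  above: {above:.1%}")  # each ~50%
```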


But the NAEP achievement levels do not measure “grade level.” They are defined in the NAEP reports thus: “basic” represents partial mastery of skills; “proficient” represents mastery; “advanced” represents extraordinary performance. “Below basic” is very poor performance.


Here are the definitions on the NAEP website:


Achievement Level Policy Definitions

Basic: Partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade.

Proficient: Solid academic performance for each grade assessed. Students reaching this level have demonstrated competency over challenging subject matter, including subject-matter knowledge, application of such knowledge to real-world situations, and analytical skills appropriate to the subject matter.

Advanced: Superior performance.


Here is a statement on the U.S. Department of Education (National Center for Education Statistics) website:


https://nces.ed.gov/nationsreportcard/studies/statemapping/faq.aspx

The statement reads: “Proficient is not synonymous with grade-level performance.”


The NAEP website says that the Governing Board thinks that the goal should be “proficient,” not “basic,” but the reality is that these achievement levels have been in place since 1992, and in no state or district has 100% of students ever achieved NAEP proficiency. In only one state, Massachusetts, have as many as 50% of students reached proficiency. If you believe, as Campbell Brown and the NAGB do, that 100% of students should reach proficiency, then you believe that somewhere there is a baseball team that never loses a game, or an entire school district in which all children get straight As. It has never happened, not even in the wealthiest, most successful schools and districts. When elephants can fly, that is when “all” students will reach NAEP proficiency. Be it noted that the standard (the passing mark or cut score) for the Common Core tests is aligned with NAEP proficient, which is why 65-70% of students consistently “fail.”


I was a member of the National Assessment Governing Board for seven years. I read questions before they were administered to samples of students across the nation and in every state. The greatest number of students score as “basic,” which I consider the equivalent of a B or C. Those who register as “proficient” have turned in the equivalent of an A performance. Advanced is for superstars; typically, only 5-10% of students are “advanced.” About a third are proficient or advanced. The remaining 65% are basic or below basic. (These are my characterizations, not the government’s or NAGB’s.)


To expect that most students will score the equivalent of an A is nonsensical.


Ms. Brown has been engaged in a Twitter debate with Tom Loveless of the Brookings Institution. Loveless, a real expert with years of teaching experience (elementary school) and a doctorate, has been studying and writing about NAEP and student performance for many years. He chastised Brown on Twitter for saying that 2/3 of students are “below grade level.” He encouraged her to check her facts, because he assumed that her journalistic background had taught her to do so. Carol Burris, director of the Network for Public Education and a veteran educator, jumped into the exchange.


Just yesterday, Brown responded with this comment:


Campbell Brown ‏@campbell_brown 
@carolburris @tomloveless99 this is why parents dont listen to u. U play semantics while 2/3 kids arent where they should be. I call BS


Since Brown thinks that NAEP proficiency is the same as “grade level,” she would profit by reading this report on the meaning of NAEP achievement levels. It gives a good overview of them and points out that they do not refer to grade levels. The report also usefully reviews the numerous critiques of the achievement levels, by experts who consider them “fundamentally flawed” and an inaccurate measure of student achievement.


I can only hope that Ms. Brown, education expert, gets a quick tutorial about what NAEP achievement levels are.


And I invite her to take the NAEP eighth-grade test, composed of released questions in reading and math, and release her scores. In a supervised setting, of course. I think she will be surprised. I will be interested to see if she is “proficient,” since she believes that anyone who is not proficient is a failure.


The recent release of the test scores of seniors on the National Assessment of Educational Progress reported that low-performing students suffered the biggest declines.

“Much like their 4th and 8th grade peers, high school seniors have lost ground in math over the last two years, according to the most recent scores on a national achievement test.

“In reading, 12th grade scores remained flat, continuing a trend since 2009.

“Perhaps the most striking detail in the test data, though, is that the lowest achievers showed large score drops in both math and reading. Between 2013 and 2015, students at or below the 10th percentile in reading went down an average of 6 points on the National Assessment for Educational Progress—the largest drop in a two-year period since 1994. The high achievers, on the other hand—those at or above the 90th percentile—did significantly better in reading, gaining two points, on average, while staying stagnant in math.”

John Thompson knows that reformers point to the District of Columbia as one of their examples of success. After all, the district has been controlled by Teach for America alumnae Michelle Rhee and Kaya Henderson since 2007. They own whatever successes and failures have occurred over the past eight years. The centerpiece of their claims of success is NAEP scores, which are up.


In this post, Thompson identifies the flaws in the narrative of success. Thompson lauds John Merrow for critiquing the narrative of a district he once held up as an exemplar of successful reform. Merrow asked, in his post, why anyone was celebrating Kaya Henderson’s five-year anniversary in the wake of the disastrous scores on the Common Core PARCC tests, which showed a district where academic performance was dismal.


Thompson reviews the NAEP scores, using Rick Hess’s data.


Hess cites overall gains in NAEP growth under Rhee and Henderson, but those same NAEP studies actually support the common sense conclusion that the numbers reflect gentrification. Hess’s charts show that from 2005 to 2013, the percentage of D.C. students who are low-income dropped from 66% to 61.6%. (In my world, a 61.6% low-income urban school seems danged-near rich.) Per student spending increased by 40% during that time. (The new spending, alone, comes close to the total per student spending in my 90% low-income system.)


According to Hess’s chart, the percentage of the D.C. students who are black dropped by 1/8th from 2005 to 2013, and the percentage of students with disabilities dropped by 1/7th. And, the 2015 NAEP excluded as many as 44% of D.C.’s English Language Learners. The conservative reformer RiShawn Biddle calls that exclusion “massive and unacceptable test-cheating.”


Even so, as Merrow reminds us, the performance gap between low-income and more affluent students has grown even wider; for instance, from 2002 to 2015, the 8th grade reading performance gap grew from 17 to 48 points.


Before Rhee/Henderson, the growth in D.C. test scores was spread much more widely. Because I believe that 8th grade reading is the most important NAEP metric in terms of evaluating school performance, I will cite some of those metrics in support of Merrow. From 1998 to 2002, black 8th grade reading scores increased from an average of 233 to 238. By 2015, they were down to 236. From 1998 to 2002, average 8th grade reading scores for low-income students increased from 229 to 233. In 2015, they remained at 233.


Thompson says it is sad that the elites now re-engineering public education are utterly disconnected from the lives and realities of the children who attend those schools or the people who teach in them. They need a reality check, or maybe a course in sociocultural sensitivity training so that they stop stepping on the faces of children and adults whose lives they know nothing about.


News flash! There is a national test that enables us to compare reading and math scores for every state! It is called NAEP. It reports scores by race, English language learner status, poverty, gender, and disability status, as well as achievement gaps. This is apparently unknown to the New York Times and the Secretary of Education, who has said repeatedly that we need Common Core tests to compare states.

The New York Times, America’s newspaper of record, has a story today about Massachusetts’ decision to abandon PARCC, even though its State Commissioner Mitchell Chester is chairman of the board of PARCC. True or Memorex? Time will tell.

But the story has a serious problem: the opening sentence.

“It has been one of the most stubborn problems in education: With 50 states, 50 standards and 50 tests, how could anyone really know what American students were learning, or how well?”

Later the story has this sentence:

“The state’s rejection of that test sounded the bell on common assessments, signaling that the future will now look much like the past — with more tests, but almost no ability to compare the difference between one state and another.”

What happened to the National Assessment of Educational Progress? It has been comparing all the states and D.C., as well as many cities, since 1992. Has no one at the New York Times ever heard of NAEP?

Dear Marc,

I read your latest post about NAEP scores in which you say you are taking a long view. You dismiss the disappointing results of the 2015 NAEP, which showed almost no gains and some declines. You choose instead to look at the Long-Term Trend NAEP, which has been asking identical questions since 1973. You point out that 17-year-old scores are flat since the early 1970s, which persuades you that we are in big trouble.

For readers, let me explain that there are two different versions of NAEP. The one that was recently reported is called “Main NAEP.” Its curriculum framework is updated every seven to ten years, and the content changes. Main NAEP is offered in every state every other year, in reading and mathematics. It also periodically tests other subjects, such as history, civics, and science. It gives data for students in fourth and eighth grades in every state, enabling anyone to compare performance from state to state. It also reports on achievement gaps between white and black students and between white and Hispanic students.

Then there is the “Long-Term Trend” NAEP. It is offered every four years. The reading LTT started in 1971, the math in 1973. Unlike Main NAEP, the content almost never changes, although items that are obsolete are deleted (the one deleted item I recall from my time on the governing board of NAEP was about S&H Green Stamps). It breaks out scores by race and gender.

Marc, you note the impressive progress made by students at ages 9 and 13, especially black and Hispanic students. But you then go on to say that at the current rate of improvement, 80% of students would not reach “proficient” for many decades, perhaps more than a century. I have to disagree with you here, because setting NAEP proficient as a goal is as unrealistic as the NCLB mandate that 100% of American students would be proficient by 2014. NAEP proficient is a very high standard. NAEP started measuring state performance in 1992, and 23 years later, Massachusetts is the only state in the nation where as many as 50% of students have reached NAEP proficient. Why set an impossible goal?

It is true that the scores for 17-year-old students have barely moved, but not for the reasons you cite. It is not that students get dumb as they reach senior year, but that they don’t give a hoot about a test that means nothing to them. When I was on the NAEP governing board, we devoted an entire meeting to discussing the problem of motivation for students at age 17 or senior year. Seniors doodled or made patterns on the answer sheets. They didn’t care what their score was because they knew the test didn’t matter. It didn’t affect their grades; it didn’t affect their college prospects. They would never find out how they did. For them, it was a meaningless exercise. The board considered ways to motivate them. Suppose we offered a pizza party to encourage students to care? Suppose we offered cash prizes? We could not agree on a solution to the problem of motivating high school seniors to take seriously a test that didn’t count.

And that is why I am not surprised or alarmed by the test scores of 17-year-old students on a test that they know doesn’t matter to them.

Carol Burris carefully reviewed the NAEP scores. Listen to her interview on public radio. Unlike many commentators, she has the advantage of being an experienced educator and is also executive director of the Network for Public Education.
