The Thomas B. Fordham Institute commissioned and published an evaluation of the “content and quality of the next generation assessments,” specifically the Common Core assessments (PARCC and SBAC), as well as ACT Aspire and the Massachusetts Comprehensive Assessment System (MCAS). The report’s introduction was written by Michael Petrilli and Amber Northern of the Institute; the report itself was written by researchers Nancy Doorey and Morgan Polikoff. The introduction and a link to the report appear in this post; the following posts will debate the study and its findings.
This link takes you to the introduction.
This link takes you to the full report.
The authors of the report concluded that the Common Core assessments (PARCC and SBAC) were superior to the ACT Aspire and the MCAS.
This is the central finding:
Here’s just a sampling of what we found:
Overall, PARCC and Smarter Balanced assessments had the strongest matches to the CCSSO Criteria.
ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.
The report is long, but the meat of the report can be easily accessed. It is important that you wrap your mind around the report because the next post will challenge its findings.
Since many of you are teachers and have administered some of these tests, feel free to add your voice to the debate.
Who paid for this? It reads like a “Let’s save David Coleman’s career” document.
“Overall, PARCC and Smarter Balanced assessments had the strongest matches to the CCSSO Criteria.”
Who said the best criteria were the CCSSO criteria?? MCAS matches the Mass Frameworks criteria, which were developed by teachers and are more humane and deep. If this was only meant to examine whether a test matched the CCSSO criteria, then the results are limited. The question Fordham should have asked is not which tests matched the CCSSO criteria; it should have been which standards are best. And the question regarding MCAS and Massachusetts should have been put to colleges, looking at rates of college entrance and graduation…or are the Mass kids college and career ready? This is rigged.
Perhaps we should point out there is no validity to ANY report done by Thomas B. Fordham. They have been biased toward a pro-testing agenda (on behalf of their funders), and untruthful in their reporting on education, for too long to grant them any credibility. No ethos, no logos, and pi**-poor pathos.
Which test is best?
a. PARCC b. Aspire c. Smarter Balanced d. Other e. None of the Above
Correct answer: none of the above.
Honestly, Diane, I feel they are just shills for all things ed reformy. If they support it, they study and measure it and find it the most fabulous thing ever! States have dropped the PARCC like it is contagious, so they prop it up. Only six states plus DC are left in the consortium. Wonder if Gates gave them a chunk of $$? (I’m less familiar with SBAC.) Here in NJ, Gates money is flowing as the Chamber of Commerce, PTA, and others launch a PR campaign about how wonderful PARCC is after 135,000 families opted out last year. This latest Petrilli scheme sounds more like a save-the-tests-to-prop-up-someone’s-business-plan study than anything else.
So let me understand this. Tests based directly on Common Core have a higher correlation with Common Core objectives than other tests?
Wish _I’d_ been paid the big bucks for that study.
Russell, I agree. I’ve taught in MA for ten years and have experience with both the MCAS and PARCC, and the goal of the report sounds like a fool’s errand. Whenever smart people did stuff like this, I always assumed they knew something I did not. But after years of experience dealing with situations where the emperor truly had no clothes, I finally feel confident in saying that the premise of this study does sound amazingly stupid.
I’d love to discuss the lack of quality of the SBACs, but I have not seen or heard about any of the questions. I spent the entire testing session approving password logins and troubleshooting password and iPad glitches. Some of my (highest “performing”) students were booted out of the system mid-question, and were unable to log back in until four to six weeks later.
Any chance the one million dollars the Fordham Institute received from Bill and Melinda Gates could have influenced the report?
The MCAS is a test written using the “old” MA state standards. Why would it mirror the Common Core standards? I bet the MCAS has a much stronger match to the MA state standards than the PARCC or SBAC tests do.
This report is not just a set-up; it is based on a premise so stupid that it defies all rational thought. Pure Petrilli drivel.
If science and math were built by such shaky methods, we’d still fear eclipses and be counting on fingers. Really? Basing a “study” and its conclusions on something non-conclusive like Common Core? Fordham’s great Very Serious Sounding name does not give it a pass or credibility on these pseudo-studies.
We live in a post-scientific era where sound bites and headlines trump the truth. Fordham’s nonsensical conclusion will still be a plus for their reform narrative.
We insist on fighting their BS claims and false narratives using complex explanations rooted in research and experience. They don’t care because their PR beats ours every day.
They want people to believe in silver bullets: standards/tests/charters/computers.
However, the one universal truth about teaching and learning is that there is no universal truth about teaching and learning. Now that’s a bad sound bite; nobody wants to hear the fact that there is no easy answer that works for all kids in all schools in all communities. Much better to sell them what they want.
Here in Ohio, the far-right Republican legislature has introduced yet another round of “religious freedom” bills. Under this one, students can proselytize their brand of evangelical Christianity with the full support of the State. I wonder whether these religious cliques would be so welcoming if a Satanic temple or the Church of the Blessed Soda Can were to show up. The worst of it? An anti-intellectualism where science teachers must accept creationism as valid science on any student assignment. We seem to be moving backward in Ohio. Just remember, Kasich wants to be your president because he is “just like Jesus, only better”.
Which is stronger, the monkey or the elephant?
Since a monkey can climb a tree, the monkey is stronger!
Wait, what?
Much ado about very little. I skimmed the whole report, but all one really needs to know is stated in the introduction:
“We deployed a brand-new methodology… that was… based on the Council of Chief State School Officers’ 2014 ‘Criteria for Procuring and Evaluating High-Quality Assessments.'”
If there is any argument at all presented by this study, it’s a circular one.
It is useful to keep in mind that PARCC and SBAC are not the only tests aligned to or based on Common Core’s standards. ACT’s Aspire is also aligned to CC’s standards; however, ACT claims it does not assess all of CC’s standards, making Aspire, in ACT’s view, not fully aligned to CC, although which CC standards it does not assess (and at what grades) is not clear on ACT’s website. The 2014 MCAS is also based on CC standards, because the Board of Education voted in CC in 2010, and as of 2011 the MA Curriculum Framework in ELA and math consists mainly of CC standards, plus a few in both subjects that were added by Department staff. How the 2014 MCAS differs from the 2014 PARCC has never been explained.
What is worthy of exploration (but not explored in this Fordham report) is why 2014 was chosen. It is quite clear from the percentages at each performance level from 2008 on that the grade 10 ELA and math tests had been steadily dumbed down. A February 2015 MBAE report first pointed that out publicly. To confirm, see the tables in the last chapter of this White Paper by Mark McQuillan, Richard Phelps, and Sandra Stotsky (http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/).
The fact that an October 2015 Mathematica study found the 2014 MCAS and the 2014 PARCC about equal in predicting college readiness suggests that PARCC (and likely SBAC) and any other CC-aligned test expect much less in grade 10 than the original MCAS tests (1998-2007) did.
Fordham has received over $8 million from Gates over the last six or so years.
The critical part of a test is not the questions it asks; it is the quality and depth of the diagnostics it provides. We need to evaluate what type of report the test can generate before asking what types of questions it asks. http://mathisconceptual.blogspot.com/2014/11/guide-for-evaluating-data-tools.html
Can you point us to the “diagnostic” information it provided to any school, parent, or teacher in 2015?