Yong Zhao writes here about the international test called PISA, which is used to rank, rate and stigmatize entire national systems of education. Its scores are based on a standardized test, of course, which contains the usual flaws of such tests.
His article is called “Two Decades of Havoc.”
Scholars have criticized PISA since its inception, but once the global horse race began, there was no slowing it down. PISA now drives every nation to compete for higher scores, in a “race to the top” that very few can win. The critics have been ignored.
It is a stupid metric. Should we really long to be like Estonia? Can all of education be boiled down to questions on a test? What do the results tell us about the future? Nothing, really. When the first international test was given in mathematics in 1964, the U.S. came in last. And over the next 50 years, the U.S. economy surpassed the nations with higher math scores.
“Should we really long to be like Estonia?”
Yes.
Estonian Longing
I long to be Estonian
I love their stony ways
I love their sliced balloney an’
I love their stony bays
Estonia will win the race
To top of PISA pie
Estonia is just the place
To emulate (or buy)
“When the first international test was given in mathematics in 1964, the U.S. came in last. And over the next 50 years, the U.S. economy surpassed the nations with higher math scores.”
Perhaps the PISA people are just looking at the whole thing backwards, and countries should be trying to do not their best on PISA but their worst.
The larger irony: pre-NCLB, the USA was producing an amazing number of patents, and China, looking at that fact, was arguing that perhaps it should move away from standardized tests, just as the USA decided to jump enthusiastically onto the standardizing bandwagon.
“It is a stupid metric.”
Channeling Sr. Swacker: It’s not even a metric. A metric requires a standard of measurement, which requires a standard unit of measurement. There is no standard unit of “learning” or “knowledge” or “intelligence”. Such things are inherently not measurable. It is, at best, a stupid assessment, but I couldn’t even tell you what it is allegedly assessing.
Of Metricks and pet tricks
It’s rhet’ric
And a pet trick
A metrick
Not a metric
“The Leaning Tower of PISA”
Rigor is de rigueur
Testing is the norm
PISA is the figure
The metrick of Deform
ROFLMAO!!! Great, SomeDAM!
Thanks. I too was about to have a get-together to summon the spirit of the Swacker.
“It is a stupid metric.”
The PISA is a tool of economists, social scientists and other “bean counters.” It is designed to put schools, teachers and students on a competitive track. It inspires the churn and burn of the hamster wheel that ultimately leads down the road to absolutely nothing. There is no evidence that higher test scores lead to a better future, economy or life.
Tool of Corporate Tools
PISA is the tool of Tools
Of e-conman for hire
A metrick for the public fools
Who listen to the liar
We wouldn’t all be talking about how great the Finnish education system is if it weren’t for PISA.
Just you wait
“Finland is finished”
Finland is finished
Encouraging “play”
Future’s diminished
On PISA they’ll pay
This is true.
Very true. There are self-contradictions in some arguments against testing.
Perhaps the biggest is using test scores (e.g., on NAEP and PISA) to “prove” that deform “failed.”
I know, I know. The argument is “we are just using the deformer’s own ‘metric’ (sic) for success and failure.” But the problem is that once one does this, one leaves the door open to claims that deform has succeeded if test scores go up in the future. One cannot have it both ways.
And if one repeats the idea that the “test measures success” often enough, people start to believe it regardless of whether one is arguing for or against deform.
You will notice, SomeDAM, that I always say that Deform has failed BY ITS OWN PREFERRED MEASURE, test scores. And then I go on to say that the tests are invalid and to explain why.
The Finns don’t care about PISA.
What makes Finland interesting is that it gives no standardized tests.
Chicken Little mated with the boy who cried wolf, giving birth to PISA. I would liken PISA to some administrators I’ve encountered in some faculty meetings, and various countries to some students. The administrator says, “Hey, Finland and Singapore have high test scores and must be great students; everyone else in your class has lower test scores than them, and therefore must be failing.” I would reply that I don’t need test scores to know Finland is great. And the rest of the students aren’t failing. When I evaluate their essays, I find strengths that aren’t on the test. And besides, I don’t judge them by comparing them to one another on a grading curve. My students learn most when they discuss ideas with one another instead of competing against one another on a test.
The U.S. could learn a lot with Finland by discussing ideas. We never needed a test to know that. We should have been learning together all along. I think we’re probably all in agreement about that.
Perhaps I am mistaken, but I believe Leonie is merely highlighting the fact that many US politicians, think tank wankers and others in the US have held up Finland as an example to emulate precisely because they have done well on PISA.
Time for an international Opt Out?
This is a wonderful critique of the PISA tests and the unconscionable drive to standardize education on an international scale. The illusion of objectivity and the extravagant claims about the significance of these scores are aided and abetted by researchers in every participating country and by US economists such as Eric Hanushek, who loves to leap from test scores to predictions about the fate of the economy.
Among the many fallacies in these international tests is this: that paper and online versions of the questions are equivalent and that the translations of the questions are valid.
I looked into one technical report explaining the procedures for arriving at “equivalent” translations. You do not need to read the whole report to see how this effort at bridging cultural differences is worthy of ridicule in aim and procedure.
“Chapter 5, Translation and Verification of the Survey Material, Introduction: This chapter explains the procedures used for translation, adaptation and verification for both paper-based (PBA) and computer-based (CBA) materials in PISA 2018. One of the important aspects of quality assurance in PISA is to ensure that the instruments used in all participating countries to assess students’ performance provide reliable and comparable information. In order to achieve this, strict procedures for the localisation (adaptation, translation and validation) of national versions of all survey instrumentation were implemented in PISA 2018 as in all previous PISA rounds.” https://www.oecd.org/pisa/data/pisa2018technicalreport/PISA2018%20TecReport-Ch-05-Translation.pdf
What follows in that Chapter 5 introduction is worthy of a Monty Python sketch, or of trying to explain the multi-tier caucus process for voting in Iowa.
Here is one PISA test question. Look at the assumptions made about content knowledge, cultural understandings, and what counts as trustworthy knowledge when the topic is health-related. Imagine how this question resonates in nations and cultures where there are food taboos. https://pisa2018-questions.oecd.org/platform/index.html?user=&domain=REA&unit=R557-CowsMilk&lang=eng-MAC
Thank you, Laura. I have long considered writing an expose of high-stakes standardized testing that would consist of analyses of the test-makers’ sample release questions. In these, they are doubtless putting their best foot forward, but the questions are invariably ridiculous, and in ELA, they certainly don’t measure what they purport to measure. Often, as written, a given question arguably has no best answer; or the answer that is supposed to be best isn’t, actually; or more than one answer is plausibly, arguably correct. The questions are TYPICALLY tortuous and extraordinarily sloppily written. The sloppiness of the tests has two etiologies:
1. Trying to test for proficient attainment of “standards” that are so broad and vaguely worded that it is literally IMPOSSIBLE to operationalize them sufficiently to test for concrete attainment VALIDLY; and
2. Trying to use a testing format, the multiple-choice question, to achieve a purpose for which it isn’t usually appropriate: testing higher-level thinking skills.
I’ve done some such analysis of sample release questions for private circulation. I once sent a devastating analysis of one year’s FCAT ELA sample release questions to all my administrators and fellow English teachers. The analysis was pretty definitive.
What has held me back from doing this kind of thing at length and in book form is not having the resources to fight the inevitable lawsuits I would get from the testing companies for unauthorized use of their materials (even if these are publicly released). These people have a history of playing really, really nasty, and it doesn’t matter if their case has no merit. Defending against them can easily bankrupt an ordinary mortal.
No Test Left Behind? “Jeopardy” writers should be the test experts… or “Trivial Pursuit” writers.