
When I learned that the latest PISA (Program for International Student Assessment) results had been released, I attended a webinar, where I learned once again that the scores of U.S. 15-year-old students were somewhat below the international average. The PISA tests in math, reading, and science have been offered since 2000, sponsored by the Organization for Economic Co-operation and Development.

My takeaway from the webinar was that we should try to be more like Singapore and Macau.

I have studied the results of international assessments such as PISA and TIMSS for years. Eventually, I began to wonder what the connection was—if any—between the test scores of 15-year-old students and the economic productivity of their nation 10, 15, 20 years later. We’ve been bemoaning our scores since the first international tests were given in the 1960s, even as our economy soars way beyond the nations with higher scores on the tests.

I invited Yong Zhao to share his reaction to the latest PISA scores. His response was as brilliant as I anticipated.

Yong Zhao is one of our most accomplished scholars of education. Born in China to an impoverished family, he pursued his dreams, migrated to the United States, and has made his mark as a creative and innovative thinker. He is currently a Foundation Distinguished Professor of Education at the University of Kansas and holds an appointment as Professor of Educational Leadership at the University of Melbourne. His list of honors and publications is too long for me to recite here. But you can find it online.

Yong Zhao wrote:

It Doesn’t Make Sense: Why Is the U.S. Still Taking the PISA?

I have always wondered what America has gotten from participating in PISA every three years. Since 2000, the U.S. has been taking part in this nonsensical global academic horse race. Every time it took the test, American students stood at about the middle of the global league table. Every time the results were released, American media would point out that American students are not the best, while East Asian education systems such as China, Hong Kong, Chinese Taipei, South Korea, Japan, and Singapore are. And then U.S. authorities would invite PISA and other pundits to tell us how to improve American education.

The same story has been going on for more than two decades, but American education has not improved, at least according to the PISA scores. According to the most recent results (NCES, 2023), American students did much worse in math in 2022 than in 2003, with an 18-point decline from 483 to 465. Their reading and science scores, however, have remained about the same, without significant change over the past two decades. Although PISA experts largely blame the COVID pandemic for the decline in math, that explanation does not make much sense, because there was no decline in reading and science. Did COVID-19 only affect math, not science and reading? Of course, one can try to argue that reading and science are much less sensitive to COVID, but why?

Basically, the international standing of the U.S. and the test scores of its students have not changed much. Whatever the PISA data revealed, and whatever lessons came from other countries such as China, Japan, Singapore, or Finland, have not helped improve America’s PISA scores. By the way, Finland, the country Americans viewed as having the best education system because of its early stunning PISA performance, has seen a much more dramatic decline in its PISA scores over the past two decades: from 544 to 484 in math (a 60-point decline), from 546 to 490 in reading (a 56-point decline), and from 563 to 511 in science (a 52-point decline). I am not sure whether America still views Finland as the best education country, but its scores have dropped to almost the same point as American students’.

In fact, other than Finland, the PISA league tables have not changed much either. East Asian education systems have consistently remained the top performers and the OECD countries’ average scores have been dropping. If PISA had any impact on the world’s education quality and equity, education should not be the same as 20 years ago.

PISA does not really have much to offer to anyone, except those who benefit from the test itself—the consultants, the test makers, the data processors, and possibly some education politicians.

In a review article (Zhao, 2020), I summarized the research about PISA and found: 1) PISA markets itself as an assessment of abilities needed in the 21st century, but it is essentially the same as other international tests such as TIMSS; 2) PISA ignores the overall educational purposes of different countries by primarily assessing math, reading, and science; 3) PISA’s tests are not of high quality, with numerous theoretical and technical problems; and 4) PISA’s sampling has been manipulated in different countries. My conclusion is that instead of bringing positive changes to the world, PISA has wreaked havoc.

America has never excelled in international tests since the beginning of such assessments in the 1960s, but the low scores have not seemed to affect it much. In fact, a correlational analysis done in 2007 showed a negative correlation between international test scores and economic development (Baker, 2007). That is, countries with higher scores in the first international study did worse economically than countries with lower scores. If PISA or any other international test truly measured what matters in education, America should no longer be a developed country. On the contrary, East Asian countries have always scored well in international assessments, but their economic development has been more related to economic, political, and international orders than to their test scores.

What matters to economic development and prosperity may be the non-cognitive factors that PISA does not typically emphasize. For example, in one analysis, I found that PISA scores are negatively correlated with entrepreneurship confidence across countries (Zhao, 2012b). American students, despite their lower scores, have always had more confidence than their peers in other countries. In fact, confidence has been found to have negative correlations with test scores (Zhao, 2012b, 2014, 2018b). High-scoring education systems, except Finland, have always had a negative impact on students’ social and emotional wellbeing (Zhao, 2012a). Even PISA’s own data show that PISA scores are negatively correlated with students’ life satisfaction (OECD, 2019).

Many education systems participate in PISA because they are fooled by its claim to measure global competitiveness. Somehow these education systems are convinced that their PISA scores and rankings show how competitive they are globally. But this is not true and cannot be true. In 2022, over 80 education systems took part in PISA, but these systems are hugely different. For example, the U.S. has more than 330 million people and does not really have one education system (it has over 50 education systems based on the number of states, and over 12,000 systems if we treat each school district as a system). How can it be compared with Macao, China, a tiny place with about 688,000 people and one education system? Likewise, how can the U.S., with a per capita GDP of over $70,000, be compared with Albania, whose per capita GDP is about $6,000?

Moreover, PISA has been operational for over 20 years. The first cohort of 15-year-old students took the test in 2000. If PISA truly has predictive power, it should have produced a longitudinal study showing how those students have fared in society. They are about 39 years old today. But we haven’t seen any such report, except the wild guesses made by some scholars (Hanushek & Woessmann, 2010).

If PISA offers nothing, why does the U.S. spend the money and effort to join the game? For monitoring basic education conditions, it already has the National Assessment of Educational Progress (NAEP), the nation’s report card, which has been in existence since 1969. Why continue to participate in PISA?

Frankly, it’s inexplicable, for there is truly no reason the U.S. should continue to participate in PISA, let alone pretend to learn from high-performing countries. The lessons PISA has offered have not been productive. For example, the lesson that high-performing systems (e.g., Singapore, South Korea, and Finland) recruit high-performing high school graduates to be teachers (Barber & Mourshed, 2007) is not based on real evidence and does not actually produce better education outcomes (Gronqvist & Vlachos, 2008). The lesson that high-performing systems have clear definitions of learning expectations, a good structure of different stages, and tough measures to ensure that students have met the expectations (Tucker, 2011) is intended largely to copy East Asian education systems; but, ironically, the East Asian countries have been working very hard to change these very practices (Zhao, 2014). International learning may make sense sometimes, but it has great limitations (Zhao, 2018a). American education should focus on developing its own way to improve instead of trying to catch up with others (Zhao, 2009).

This is not to say that American education is perfect. Rather, it is to say the way forward is not to look at what others have been doing. The U.S. needs to solve its own problems and work on creating a better future. With the emergence of ChatGPT and other generative AI tools, the world has changed again. If ChatGPT had taken the 2022 PISA, it is highly likely that it would outscore all the students in the world. It would be the best education system accordingly. Today, many students use AI tools to do their schoolwork, and teachers use AI in their teaching. PISA has become even more irrelevant.

Since 2000, our scores on PISA have barely changed. While there’s much chatter about learning from other systems, it has not happened. There is no reason that the U.S. should continue its participation in PISA.

References:

Baker, K. (2007). Are International Tests Worth Anything? Phi Delta Kappan, 89(2), 101-104. 

Barber, M., & Mourshed, M. (2007). How the World’s Best-Performing School Systems Come Out on Top. McKinsey & Company. Retrieved from https://www.mckinsey.com/industries/social-sector/our-insights/how-the-worlds-best-performing-school-systems-come-out-on-top

Gronqvist, E., & Vlachos, J. (2008). One Size Fits All? The Effects of Teacher Cognitive and Non-cognitive Abilities on Student Achievement. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1311222

Hanushek, E. A., & Woessmann, L. (2010). The High Cost of Low Educational Performance: The Long-run Economic Impact of Improving PISA Outcomes. Paris: OECD. Retrieved from http://books.google.com/books?id=k7AGPo0NvfYC&pg=PA33&lpg=PA33&dq=hanushek+pisa+gdp&source=bl&ots=2gCfzF-f1_&sig=wwe0XLL5EblVWK9e7RJfb5MyhIU&hl=en&sa=X&ei=MLPCUqaOD8-JogS6v4C4Bw&ved=0CGcQ6AEwBjgK#v=onepage&q=hanushek%20pisa%20gdp&f=false

NCES. (2023). Program for International Student Assessment (PISA). Retrieved from https://nces.ed.gov/surveys/pisa/index.asp

OECD. (2019). PISA 2018 Results (Volume III): What School Life Means for Students’ Lives. Retrieved from https://doi.org/10.1787/acd78851-en.

Tucker, M. (Ed.) (2011). Surpassing Shanghai: An Agenda for American Education Built on the World’s Leading Systems. Boston: Harvard Education Press.

Zhao, Y. (2009). Catching Up or Leading the Way: American Education in the Age of Globalization. Alexandria, VA: ASCD.

Zhao, Y. (2012a, December 11). Numbers Can Lie: What TIMSS and PISA Truly Tell Us, if Anything?  Retrieved from http://zhaolearning.com/2012/12/11/numbers-can-lie-what-timss-and-pisa-truly-tell-us-if-anything/

Zhao, Y. (2012b). World Class Learners: Educating Creative and Entrepreneurial Students. Thousand Oaks, CA: Corwin.

Zhao, Y. (2014). Who’s Afraid of the Big Bad Dragon: Why China has the Best (and Worst) Education System in the World. San Francisco: Jossey-Bass.

Zhao, Y. (2018a). Shifting the Education Paradigm: Why International Borrowing Is No Longer Sufficient for Improving Education in China. ECNU Review of Education, 1(1), 76-106. 

Zhao, Y. (2018b). What Works May Hurt: Side Effects in Education. New York: Teachers College Press.

Zhao, Y. (2020). Two decades of havoc: A synthesis of criticism against PISA. Journal of Educational Change, 1-22. doi:10.1007/s10833-019-09367-x

The news media keep a set of stock headlines at the ready whenever national or international test scores are posted: SCORES DECLINE! U.S. STUDENTS FAILING! A SPUTNIK MOMENT! OUR SCHOOLS ARE FAILING!

All these cries of “failure” feed the phony narrative of the privatization movement. Organizations funded by rightwing billionaires promote the idea that students will get higher scores in charter or voucher schools. We now know that this claim is not true: charter schools are no better (and often worse) than public schools, and vouchers subsidize wealthy families and do not save poor kids.

It is a fact that U.S. students have never performed well on international tests, as I explained in my book REIGN OF ERROR. Since the 1960s, when the first international tests were administered, our scores on these tests were mediocre to awful. Nonetheless, our economy has outperformed nations whose students got higher scores decades ago.

Now for the good news.

The latest international test scores were released a few days ago, and scores went down everywhere due to the pandemic. David Wallace-Wells, an opinion writer for The New York Times, reported that even with dropping scores, U.S. students outperformed the rest of the world!

He writes:

By now, you’ve probably registered the alarm that pandemic learning loss has produced a “lost generation” of American students.

This self-lacerating story has formed the heart of an indictment of American school policies during the pandemic, increasingly cited by critics of the country’s mitigation policies as the clearest example of pandemic overreach.

But we keep getting more data about American student performance over the last few years, and the top lines suggest a pretty modest setback, even compared to how well the country’s students performed, in recent years, in the absence of any pandemic disruption.

Now, for the first time, we have good international data and can compare American students’ performance with that of students in peer countries that, in many cases, made different choices about whether and when to close schools and whether and when to reopen them.

This data comes from the Program for International Student Assessment, coordinated by the Organization for Economic Cooperation and Development in almost 80 countries typically every three years — a long-running, unimpeachable, nearly global standardized test measure of student achievement among the world’s 15-year-olds in math, reading and science.

And what it shows is quite eye-opening. American students improved their standing among their international peers in all three areas during the pandemic, the data says. Some countries did better than the United States, and the American results do show some areas of concern. But U.S. school policies do not seem to have pushed American kids into their own academic black hole. In fact, Americans did better in relation to their peers in the aftermath of school closures than they did before the pandemic.

The performance looks even stronger once you get into the weeds a bit. In reading, the average U.S. score dropped just one point, from 505 in 2018 to 504 in 2022. Across the rest of the O.E.C.D., the average loss was 11 times as large. In Germany, which looked early in the pandemic to have mounted an enviable good-government response, the average reading score fell 18 points; in Britain, the country most often compared with the United States, it fell 10 points. In Iceland, which had, by many metrics, the best pandemic performance in Europe, it fell 38 points. In Sweden, the darling of mitigation skeptics, it fell 19 points.

In science, the United States lost three points, about the same decline as the O.E.C.D. average and still above the level Americans reached in 2016 and 2013. On the same test, German students lost 11 points, and British and Swedish students dropped five; performance by students in Iceland fell by 28 points.

In math, the United States had a more significant and worrying drop: 13 points. But across the other nations of the O.E.C.D., the average decline from 2018 to 2022 was still larger: 16 points. And in historical context, even the 13-point American drop is not that remarkable — just two points larger than the drop the country experienced between the 2012 and 2015 math tests, suggesting that longer-term trajectories in math may be more concerning than the short-term pandemic setback. Break the scores out to see the trajectories for higher-performing and lower-performing subgroups, and you can hardly see the impact of the pandemic at all.

Of course, the Program for International Student Assessment is just one test, with all the limitations of any standardized measure. It is not good news, in general, if the world is struggling academically. And none of this is an argument for American educational excellence or never-ending remote learning or a claim there was no impact from closures on American kids or a suggestion that the country’s schools should have stayed closed as long as they did.

It is simply a call to assess the legacy of those closures in the proper context: a pandemic that killed 25 million people globally and more than a million in the United States and brought more than a billion children around the world home from school in 2020. In the 18 months that followed, American schools were not choosing between universal closures and an experience entirely undisturbed by Covid-19. They were choosing different ways of navigating the pandemic landscape, as was every other school system in the world. A good first test of whether the country bungled school closures is probably whether peer countries, in general, did better. The test scores imply that they didn’t.

So why do we keep telling ourselves the self-lacerating story of our pandemic educational failure?

One reason could be that while some state-level testing data shows no correlation between school closures and learning loss, some analysis of district-level data has shown a closer correlation. But this suggests that learning loss is not a national problem but a narrower one, requiring a narrower response.

Another is that testing is blind to other markers of well-being. Chronic absenteeism, for instance, is up significantly since before the pandemic and may prove a far more lasting and concerning legacy of school closure than learning loss. And the American Academy of Pediatrics declared a national mental health emergency — language that has been echoed by the American Medical Association.

But while American teenagers have reported higher levels of emotional distress in several high-profile surveys, here, too, the details yield a subtler picture. In the first year of the pandemic, according to a study supported by the National Institute of Mental Health, 17 percent fewer American teens made mental-health visits to emergency rooms than in the year before; in the second year, they made nearly 7 percent more. According to the Centers for Disease Control and Prevention, the proportion of teenage girls reporting persistent feelings of hopelessness and sadness rose from 47 percent in 2019 to 57 percent in 2021 — a concerning rise, though only slightly larger than the six-point increase from 2017 to 2019. The number of male teens reporting the same barely grew, from 27 percent to 29 percent, having risen much faster from 2017 to 2019.

Each of these data points should probably be understood in the context of mental health surveys of older Americans, such as the General Social Survey, which found that the percentage of American adults describing themselves as “very happy” fell from 31 percent in 2018 to 19 percent in 2021 and those describing themselves as “not too happy” nearly doubled to 24 percent. It is hard to disentangle the effects of school closure here from the experience of simply living through an anxious and disruptive time. To judge by the bleakest standard, youth suicide declined during the period of school closure and returned to prepandemic levels only after schools reopened.

Overall, American adults lost some confidence in the country’s school system in those years, with national approval dropping from 50 percent to 42 percent. But the drop is not from current parents of kids in school, whose approval rose throughout the pandemic, according to Gallup, from 72 percent in 2020 to 73 percent in 2021 to 80 percent in 2022. (Other recent surveys, including ones from Pew and The Times, have found similar postpandemic parental approval, between 77 percent and 90 percent.) Instead, as Matt Barnum suggested on ChalkBeat, the decline has been driven by the perspective of people without kids in those schools today — by childless adults and those who’ve opted out of the public school system for a variety of personal and ideological reasons. [Ed.: bold added]

Could we have done better? Surely. We might have done more to open all American schools in the fall of 2020 and to make doing so safe enough — through frequent pooled and rapid testing, more outdoor learning and better indoor ventilation, among other measures — to reassure parents, 71 percent of whom said that summer that in-person school was a large or moderate risk to their children and a majority of whom said that schools should remain closed until there was no Covid risk at all. We could have provided more educational and emotional support through the darkest troughs of the pandemic and probably been clearer, throughout the pandemic, that the risk of serious illness to individual kids was relatively low.

But we could do better now, too, by sidestepping pandemic blame games that require us both to exaggerate the effect of school closures on educational achievement and the degree to which policymakers, rather than the pandemic, were responsible.

Yong Zhao writes here about the international test called PISA, which is used to rank, rate and stigmatize entire national systems of education. Its scores are based on a standardized test, of course, which contains the usual flaws of such tests.

His article is called “Two Decades of Havoc.”

Scholars have criticized PISA since its inception, but once the global horse race began, there was no slowing it down. PISA now drives every nation to compete for higher scores, in a “race to the top” that very few can win. The critics have been ignored.

It is a stupid metric. Should we really long to be like Estonia? Can all of education be boiled down to questions on a test? What do the results tell us about the future? Nothing, really. When the first international test was given in mathematics in 1964, the U.S. came in last. And over the next 50 years, the U.S. economy surpassed the nations with higher math scores.

Tom Loveless has been writing about international assessments for many years. He was quick to blow the whistle on China when the previous international test scores came out, noting that unlike the U.S. and most other nations, China was not testing a cross-section of its students.

In this article on Valerie Strauss’s Answer Sheet blog, Loveless calls out China again for rigging the outcomes to make its students #1.

China’s gains on the tests from 2015 to 2018 were so large as to be incredible, literally not credible.

So the typical change in a nation’s scores is about 10 points. The differences between the 2015 and 2018 Chinese participants are at least six times that amount. The differences are also at least seven times the standard deviation of all interval changes. Highly unusual…

The past PISA scores of Chinese provinces have been called into question (by me and others) because of the culling effect of hukou on the population of 15-year-olds — and for the OECD allowing China to approve which provinces can be tested. In 2009, PISA tests were administered in 12 Chinese provinces, including several rural areas, but only scores from Shanghai were released.


Three years later, the BBC reported, “The Chinese government has so far not allowed the OECD to publish the actual data.” To this day, the data have not been released.

The OECD responded to past criticism by attacking critics and conducting data reviews behind closed doors. A cloud hangs over PISA scores from Chinese provinces. I urge the OECD to release, as soon as possible, the results of any quality checks of 2018 data that have been conducted, along with scores, disaggregated by province, from both the 2015 and 2018 participants.

The OECD allows China to hide data and game the system. This lack of transparency should not stand.

Alan Singer calls out Common Core for the poor showing of US students on PISA. 

Remember all the promises about how Common Core would raise all test scores and close gaps? Nada.

Of course, the deeper issue is that decades of test-and-punish reforms failed, not just Common Core.

Yet those who pushed these failed policies will not abandon them. They will say—they are saying—that we must double down on failure.

The consensus among governors and policy elites that followed “A Nation at Risk” in 1983 was that common standards, tests, and accountability would lead to high levels of performance (i.e., test scores).

They didn’t. They haven’t. They won’t.

Almost four decades later, we can safely say that this theory of reform has failed. Billions of dollars wasted!

Our blog poet reflects on the meaning of the latest international test scores (PISA).

That’s a Moron

When the test hits your eye
Like an old PISA pie
That’s a moron

When the scores make you drool just like a pasta fazool
You’re a moron
When you dance down the street with a test as your beat
You’re insane
When you walk in a dream but you know you’re not dreaming signore
Scuzza me, but you see, back in old Napoli
You’re a moron
A moron, that’s a moron

https://youtu.be/1TWhFmCRdoU

Peter Greene writes regularly for Forbes, where this article appeared.

He explains for the umpteenth time (as I have done repeatedly) that the U.S. has never led the world on international tests, whether PISA, TIMSS, the IEA assessments, or any other.

He writes:

The top scores this year come from the usual batch of test takers, including the Chinese, who give the test to students from wealthy provinces. PISA day is also the one day that some folks hear about Estonia, the tiny nation that somehow has not conquered the world even though its students do well on the PISA.

PISA coverage tends to overlook one major question—why should anyone care about these scores? Where is the research showing a connection between PISA scores and a nation’s economic, political, or global success? What is the conclusion to the statement, “Because they get high PISA scores, the citizens of [insert nation here] enjoy exceptionally good______” ?

Did US companies outsource work to India and China because of their citizens’ PISA scores, or because of low wages and loose regulation? Do we have the world’s most expensive health care system because of mediocre PISA scores? Which politicians have ridden to success on the PISA score platform pony? Are any geopolitical conflicts solved by whipping out the contending countries’ PISA scores for comparison? And is there a shred of evidence that raising PISA scores would improve life for US citizens (spoiler alert: no)?…

There will be discussions of what the PISA scores do or do not prove. Some of that is fair; Common Core and other ed reforms pushed by billionaires and thinky tanks and politicians and a variety of other non-educators were going to turn this all around. They haven’t. This comes as zero surprise to actual educators. It’s just one more data point showing that all the reform heaped on education since A Nation At Risk is not producing the promised results.

Remember when Arne Duncan promoted the “Race to the Top”? Remember when David Coleman and Bill Gates pledged that Common Core would close achievement gaps and raise the lowest-performing students closer to the top-performers? There comes a time when people must be held accountable for their promises.

Yong Zhao, the brilliant education analyst, writes here about the great PISA illusion. If you have not read any of Zhao’s books, do so now. If you have not heard him speak, google him or invite him to your next big conference. He is insightful, provocative, thoughtful, absolutely delightful! He is a master at making people think and debunking hoaxes. Please read the entire post to learn how we and the rest of the world have been hoaxed by promoters of fake ideas.

He writes:

PISA is a masterful magician. It has successfully created an illusion of education quality and marketed it to the world. In 2018, 79 countries took part in this magic show out of the belief that this triennial test accurately measures the quality of their education systems, the effectiveness of their teachers, the ability of their students, and the future prosperity of their society.

PISA’s magical power in the education universe stems from its bold claims and successful marketing. It starts by tapping into the universal anxiety about the future. Humans are naturally concerned about the future and have a strong desire to know if tomorrow is better than, or at least as good as, today. Parents want to know if their children will have a good life; politicians want to know if their nations have the people to build a more prosperous economy; the public wants to know if the young will become successful and contributing members of the society.

PISA brilliantly exploits the anxiety and desire of parents, politicians, and the public with three questions (OECD, 1999, p. 7):

  • How well are young adults prepared to meet the challenges of the future?
  • Are they able to analyse, reason and communicate their ideas effectively?
  • Do they have the capacity to continue learning throughout life?

These words begin the document that introduced PISA to the world in 1999 and have been repeated in virtually all PISA reports ever since. The document then states the obvious: “Parents, students, the public and those who run education systems need to know” (OECD, 1999, p. 7). And as can be expected, PISA offers itself as the fortuneteller by claiming that:

PISA assesses the extent to which 15-year-old students, near the end of their compulsory education, have acquired key knowledge and skills that are essential for full participation in modern societies. … The assessment does not just ascertain whether students can reproduce knowledge; it also examines how well students can extrapolate from what they have learned and can apply that knowledge in unfamiliar settings, both in and outside of school. This approach reflects the fact that modern economies reward individuals not for what they know, but for what they can do with what they know. (OECD, 2016, p. 25).

This claim not only offers PISA as a tool to soothe anxiety but also, and perhaps more importantly, positions it as the tool for that purpose, because it helps to knock out its competitors. As an international education assessment, PISA came late. Prior to PISA, the International Association for the Evaluation of Educational Achievement (IEA) had already been operating international assessments since the 1960s, offering influential programs such as TIMSS and PIRLS. For a start-up to beat the establishment, it must offer something different and better. That’s exactly what PISA promised: a different and better assessment…

However, the claim, the foundation upon which PISA has built its success, has been seriously challenged. First, there is no evidence to justify, let alone prove, the claim that PISA indeed measures skills that are essential for life in modern economies. Second, the claim is an imposition of a monolithic and West-centric view of societies on the rest of the world. Third, the claim distorts the purpose of education.

Made-up Claim

The claim that PISA measures knowledge and skills essential for modern society or the future world is not based on any empirical evidence. Professor Stefan Hopmann of the University of Vienna writes:

There is no research available that proves this assertion beyond the point that knowing something is always good and knowing more is better. There is not even research showing that PISA covers enough to be representative of the school subjects involved or the general knowledge-base. PISA items are based on the practical reasoning of its researchers and on pre-tests of what works in most or all settings — and not on systematic research on current or future knowledge structures and needs. (Hopmann, 2008, p. 438).

In other words, the claim was just a fantasy, an illusion, entirely made up by the PISA team. But PISA keeps repeating its assertion that it measures skills needed for the future. The strategy worked. PISA successfully convinced people through repetition…

Although PISA claims that it does not assess according to national curricula or school knowledge, its results have been interpreted as a valid measure of the quality of educational systems. But the view of education promoted by PISA is a distorted and extremely narrow one (Berliner, 2011; Sjøberg, 2015; Uljens, 2007). PISA treats economic growth and competitiveness as the sole purpose of education. Thus it only assesses subjects — reading, math, science, financial literacy, and problem solving — that are generally viewed as important for boosting competitiveness in the global economy driven by science and technology. PISA shows little interest in other subjects that have occupied the curricula of many countries such as the humanities, arts and music, physical education, social sciences, world languages, history, and geography (Sjøberg, 2015).

While preparing children for economic participation is certainly part of the responsibility of educational institutions, it cannot and should not be the only responsibility (Labaree, 1997; Sjøberg, 2015; Zhao, 2014, 2016). The purpose of education in many countries includes much more than preparing economic beings. Citizenship, solidarity, equity, curiosity and engagement, compassion, empathy, cultural values, physical and mental health, and many others are among the purposes frequently mentioned in national education goal statements. But these aspects of the purpose of education “are often forgotten or ignored when discussions about the quality of the school is based on PISA scores and rankings” (Sjøberg, 2015, p. 113).

Zhao presents a devastating critique of the validity of PISA. It is a must-read.

Politico Morning Education reports:


U.S. SCORES IN READING, MATHEMATICS AND SCIENCE LITERACY REMAINED ESSENTIALLY FLAT FROM 2015 in the latest Program for International Student Assessment results, but U.S. rankings improved because other education systems worsened.

— The 2018 PISA results showed U.S. average scores in reading and science literacy were higher than the average of about three dozen mostly industrialized countries making up the Organization for Economic Cooperation and Development, which develops and coordinates the assessment. But U.S. average math scores were lower than the OECD average.

— PISA, an international assessment administered every three years, measures 15-year-old students’ literacy in the three disciplines and is designed to provide a global view of U.S. students’ performance compared to their peers in nearly 80 education systems.

— “If I communicated nothing, I hope I communicated that we are struggling in math in comparison to our competitors around the world,” Peggy G. Carr, the associate commissioner of assessments for the National Center for Education Statistics, told reporters in a call before the results were released. Nicole Gaudiano has more.


The PISA results were released, and they put the test-and-punish reforms of the past two decades in a harsh light. Billions have been spent on testing and spurious teacher evaluations.

Dana Goldstein writes in the New York Times:

The performance of American teenagers in reading and math has been stagnant since 2000, according to the latest results of a rigorous international exam, despite a decades-long effort to raise standards and help students compete with peers across the globe.

And the achievement gap in reading between high and low performers is widening. Although the top quarter of American students have improved their performance on the exam since 2012, the bottom 10th percentile lost ground, according to an analysis by the National Center for Education Statistics, a federal agency.

If you recall, the Disrupters claimed that their method would both “Race to the Top” and “close achievement gaps.”

Their strategies did neither. Time for a change.