Archives for category: Testing

The College Board has not released the syllabus for the AP African-American Studies course that the state of Florida wants to ban because, they say, it has “no educational value” and violates state law by invoking “critical race theory.”

But the syllabus was released by NBC News and is easily found on the internet.

And here is the syllabus.

I suggest that you read it for yourself.

Stanley Kurtz, a conservative academic, wrote a scathing critique in National Review, where he blasted the AP course as “Neo-Marxist” and intent on propagating a socialist-Marxist-Communist mindset. Google and you will find follow-up articles by Kurtz.

I taught the history of American education, and I wrote books that specifically included the history of the education of Black Americans. To write about that history, I read many of the authors cited in the AP course. None of those authors, from Frederick Douglass and Carter Woodson to W.E.B. Du Bois and Booker T. Washington, should be excluded from a course like this.

I will say without hesitation that the course is not, as Florida officials claim, “leftwing indoctrination.” Very few Americans know anything about African history, so my guess is that 99% of that history will be new to every reader. I am not sure why DeSantis is upset by “intersectionality.” A reporter should ask him to define it. I saw no problem in the mention of the Black Lives Matter movement or the reparations movement, because they are part of history; they exist. Why ban them? The DeSantis team wants the AP course of study to be upbeat; to show the celebratory rightwing view of American history; to exclude authentic African American thinkers, like Kimberlé Crenshaw and Michelle Alexander.

True, there is a topic on “Black Queer Studies” that must drive Ron DeSantis and his allies crazy. I doubt that any students will be turned gay by learning about the topic. But this topic alone will be sufficient to get the course banned in DeSantis’ state and probably other red states. It might get axed by the College Board, which is alert to its bottom line. If the pushback hurts revenue, the College Board is likely to beat a hasty retreat.

Kurtz is right on one count. He wrote that “A stunningly large portion of the APAAS curriculum is devoted to the history of black studies.” This is true. Students will learn a lot about the leading scholars of the field and their contributions. Much of the scholarship is about the scholarship. And much, rightly, is about the brutal exploitation and degradation of African peoples.

In discussions with students about their expectations for the course, students said there should be an “unflinching look at history and culture.” Of course. They don’t want a sanitized history. They also said “Emphasis should be placed on joy and accomplishments rather than trauma.” They felt that they had learned about slavery every year, and “students feel they have been inundated with trauma.” In this course, it’s hard to find the “joy and accomplishments” that students are hoping to learn about. It is unlikely that they will learn much about barrier-breaking individuals like Dr. Charles Drew; LBJ’s Housing Secretary Robert Weaver; Guy Bluford, the first Black astronaut, or Mae Jemison, the first Black female astronaut; Ralph Bunche, the first African American to win the Nobel Peace Prize, for his diplomacy; Leontyne Price, the great international opera star, born in Laurel, Mississippi, or the newest international opera star, Michelle Bradley, born in Versailles, Kentucky; or even the first Black President, Barack Obama. Of the hundreds of thousands of African Americans who have achieved their dreams, not much is said. The students say they know a lot about Dr. King, Malcolm X, and Rosa Parks; they want more. And they should have the pleasure of learning the inspiring stories of African Americans who shattered stereotypes and made history.

The College Board says this is a preliminary version of the ultimate AP exam. It’s a good start. Let’s see if it can survive the political maelstrom.

Jan Resseger is consistently the voice of wisdom on anything related to children and young people. In this post, she explains why we should not be panicked by the decline of NAEP scores. The scores reflected the toll that the pandemic exacted. But now that children are back in school, we can expect learning to proceed without major disruption.

She writes:

I think this year’s NAEP scores—considerably lower than pre-pandemic scores—should be understood as a marker that helps us define the magnitude of the disruption for our children during this time of COVID. The losses are academic, emotional, and social, and they all make learning harder….

Education Week’s Sarah Schwartz asked Stanford University professor Sean Reardon (whose research tracks the connection of poverty and race to educational achievement) whether “it will take another 20 years to raise scores once again.” Reardon responded: “That’s the wrong question…. The question is: What’s going to happen for these (9-year-old) kids over the next years of their lives.” Schwartz describes more of Reardon’s response: “Children born now will, hopefully, attend school without the kinds of major, national disruptions that children who were in school during the pandemic faced. Most likely, scores for 9-year-olds will be back to normal relatively soon, Reardon said. Instead, he said, we should look to future scores for 13-year-olds, which will present a better sense of how much ground these current students have gained.”

Every major newspaper carried a story this morning about the sharp decline in NAEP scores because of the pandemic.

The moral of the story is that students need to have human contact with a teacher and classmates to learn best. Virtual learning is a fourth-rate substitute for a real teacher and interaction with peers.

Tech companies have told us for years that we should reinvent education by replacing teachers with computers. We now know: Virtual learning is a disaster.

The crisis we should worry about most is the loss of experienced teachers, who quit because of poor working conditions, low pay, and attacks by “reformers” who blame teachers at every opportunity.

The pandemic isolated children from their teachers. It caused them to be stuck in front of a computer. They were bored.

They needed human interaction. They needed to look into the eyes of a teacher who encouraged them to do better, a teacher who explained what they didn’t understand.

The NAEP scores are a wake-up call. We must treasure our teachers and recognize the vital role they play in educating the next generation.

Any politician who disrespects teachers by calling them “pedophiles” and “groomers” should be voted out of office.

Every “reformer” who disparages teachers should be required to teach for one month, under close supervision, of course.

Experienced teacher Nancy Bailey opposes Michael Petrilli’s proposal to give NAEP tests to kindergartners. Petrilli, who is president of the conservative Thomas B. Fordham Institute, made this proposal in Education Next.

Petrilli recognizes that the typical 5-year-old can’t read and probably can’t hold a pencil but thinks there is value in online visual tests. He argues that it’s a mistake to delay NAEP until 4th grade, because policymakers are “left in the dark” about what children know by age 5.

He writes:

Grades K–3 are arguably the most critical years of a child’s education, given what we know about the importance of early-childhood development and early elementary-school experiences. This is when children are building the foundational skills they’ll need in the years ahead. One report found that kids who don’t read on grade level by 3rd grade are four times more likely to drop out of high school later on. Why do we wait until after the most important instructional and developmental years to find out how students are faring?

Petrilli assumes that knowing test scores leads to solutions. I question that. We have been testing random samples of 4th and 8th graders (and sometimes seniors) since the early 1970s, and the information about test scores has not pointed to any solutions. After 50 years, we should know what needs to be done. We don’t, or at best, we disagree. Since 2010, test scores have been stubbornly flat. Does this mean that the Common Core and Race to the Top failed? Depends on whom you ask. It’s hard for me to see what educational purpose would be served by testing a random sample of kindergartners online.

Bailey doesn’t see what the purpose is. She points out that Petrilli was never a teacher of young children. He never was a teacher, period. He is an author and a think tank leader who champions conservative causes.

She writes:

The National Assessment of Educational Progress (NAEP) randomly assesses students across the country in math and reading in grades 4 and 8, and in civics and U.S. History in grade 8 and Long-Term Trend for age 9, but it doesn’t test kindergartners. Why should it? Why is the testing of kindergartners necessary? The answer is it isn’t.

Suppose we learn that 52% of kindergartners recognize the color red. Suppose we learn that 38% recognize a square. Suppose we learn that 63% recognize an elephant. So what? Why does any of this matter?

Bailey writes:

The best assessment of this age group is accomplished through observation, by well-prepared early childhood educators who understand the appropriate development of children this age, who can collect observational data through notes and checklists as children play and socialize with their peers.

Who needs the information that might be collected about a random sample of kindergarten children? What would they do with it?

It’s a puzzlement.

I recently posted a long article by Michael Fullan that proposed a new paradigm for education reform. I found Fullan’s dismissal of the status quo persuasive, as well as his description of a forward-looking approach.

Laura Chapman, inveterate researcher and loyal reader, reviewed Fullan’s recent work and was disappointed with what she found:

If ever any paper needed close reading, this is it, especially Fullan’s discussion of the 6C’s, 21st Century Skills, and vague references to some ancillary research in California and Australia.

I am working on learning more about at least one of Fullan’s California projects. Unfortunately, there is no peer-reviewed summary of accomplishments.

Here is a link if you also want to see what assessment looked like in one Fullan project, a three-year $10 million effort to improve the performance of English Learners including long-term English Learners, funded by the California School Boards Association and several non-profits.  https://michaelfullan.ca/wp-content/uploads/2017/11/The-Coherence-Framework-in-Action.pdf

You will see that the main measures of accomplishment are expressed as percentages, and that these percentages changed over the three-year project.

100% of Long-Term English Learners will access new curriculum supported with adequate technology, instructional materials, and assessments.

5% annual increase in English Learner language proficiency.

3% annual increase in English Learner A-G completion. (A-G refers to courses required for admission to either the California State University or University of California systems with a grade of C or better).

50% increase in Long-Term English Learner students reporting they feel positively connected to the school environment and experience success.

Year-to-year changes in these percentages appear to be framed as if they were evidence of continuous improvement.

This brief suggests that more detail can be found in specific pages of Fullan’s 2016 book: The Taking Action Guide to Building Coherence in Schools, Districts, and Systems. You have to buy or borrow the book to see the details.

Although some of Fullan’s paper is appealing, it also represents another proposal for managing learning as if there were no redeeming features in our public schools or in the principle of democratic governance for them.

It is worth noting that Joanne Quinn, a frequent collaborator with Fullan, has an MBA in Marketing and Human Resource Management. According to LinkedIn, she has been President of Quinn Consultants in Toronto for 16 years. She also served for ten years as the Superintendent of Education for four schools in a district with 65,000 students.

Fullan is a think-big thinker: “This paper is intended to provide a comprehensive solution to what ails the current public school system and its place in societal development – a system that is failing badly in the face of ever complex fundamental challenges to our survival, let alone our thriving as a species.”

I am uncomfortable with anyone who claims to have a “comprehensive solution” for the current public school system (including the USA’s) and who fails to address the fiscal and policy constraints that have been imposed on that system for decades, along with a pattern of denial that planet earth and human survival are at risk.

If you want a better, brief, jargon-free article on doable reforms, find “Twenty Years of Failing Schools” in The Progressive, February/March issue (pages 50-51). The article includes specific suggestions for the Biden administration and the new Secretary of Education. The author is Diane Ravitch.

Apparently, the voucher schools were embarrassed by the Ohio study showing that kids who use vouchers lose ground academically.

There were two ways to respond to that finding: 1) improve instruction in the voucher schools by requiring them to hire certified teachers; 2) obscure the data.

The voucher lobby chose the second route.

The Republican-dominated legislature is now vastly expanding the state’s failing voucher program. But a few years ago, it decided that voucher schools would no longer be required to give the same exams that students in public schools are required to take. The conservative Thomas B. Fordham Institute worried about the change, because it makes it difficult, if not impossible, to draw comparisons between students in public schools and their peers in private and religious schools.

That’s the goal.

Many other states that offer vouchers allow those schools not to take the state exams. Some, like Florida, expect no accountability from voucher schools. Others ask those schools to administer an “equivalent” standardized test, which makes it impossible to compare voucher schools to public schools.

Les Perelman, former professor of writing at MIT and inventor of the BABEL Generator, has repeatedly exposed the quackery in computer-scoring of essays. If you want to learn how to generate an essay that will win a high score but make no sense, google the “BABEL Generator,” which was developed by Perelman and his students at MIT to fool the robo-graders. He explains here, in an original piece published nowhere else, why the American public needs an FDA for assessments, to judge their quality.

He writes:

An FDA for Educational Assessment, particularly for Computer Assessments

As a new and much saner administration takes over the US Department of Education, led by Secretary of Education Miguel Cardona, it is a good time, especially regarding assessment, to ask Juvenal’s famous question: “Who watches the watchmen?”

Several years ago, I realized that computer applications designed to assess student writing did not understand the essays they evaluated but simply counted proxies such as the length of an essay, the number of sentences in each paragraph, and the frequency of infrequently used words. In 2014, three undergraduate researchers from Harvard and MIT and I developed the Basic Automatic B.S. Essay Language Generator, or BABEL Generator, which could in seconds generate 500-1,000 words of complete gibberish that received top scores from Robo-grading applications such as e-rater, developed by the Educational Testing Service (ETS). I was able to develop the BABEL Generator because I was already retired and, aside from some consulting assignments, had free time for research unencumbered by teaching or service obligations. Even more important, I had access to three undergraduate researchers, two from MIT and one from Harvard, who provided substantial technical expertise. Much of their potential expertise, however, was unnecessary, since after only a few weeks of development our first iteration of the BABEL Generator was able to produce gibberish such as

Society will always authenticate curriculum; some for assassinations and others to a concession. The insinuation at pupil lies in the area of theory of knowledge and the field of semantics. Despite the fact that utterances will tantalize many of the reports, student is both inquisitive and tranquil. Portent, usually with admiration, will be consistent but not perilous to student. Because of embarking, the domain that solicits thermostats of educatee can be more considerately countenanced. Additionally, programme by a denouncement has not, and in all likelihood never will be haphazard in the extent to which we incense amicably interpretable expositions. In my philosophy class, some of the dicta on our personal oration for the advance we augment allure fetish by adherents.

that received high scores from the five Robo-graders we were able to access.

I and the BABEL Generator were enlisted by the Australian teachers unions to help the successful opposition to having the national K-12 writing tests scored by a computer. The Educational Testing Service’s response to Australia’s rejection was to have three of its researchers publish a study, “Developing an e-rater Advisory to Detect Babel-generated Essays,” describing how they generated over 500,000 BABEL essays based on prompts from what are clearly the two essay tasks of the Graduate Record Examination (GRE), the essay portion of the PRAXIS teacher certification test, and the two essay sections of the Test of English as a Foreign Language (TOEFL), and compared the BABEL essays to 384,656 actual essays from those tests. The result of this effort was the development of an “advisory” from e-rater that would flag BABEL-generated gibberish.

Unfortunately, this advisory was a solution in search of a problem.  The purpose of the BABEL Generator was to display through an extreme example that Robo-graders such as e-rater could be fooled into giving high scores to undeserving essays simply by including the various proxies that constituted e-rater’s score.  Candidates could not actually use the BABEL Generator while taking one of these tests; but they could use the same strategies that informed the BABEL Generator such as including long and rarely used words regardless of their meaning and inserting long vacuous sentences into every paragraph.
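The proxy-counting approach described above can be illustrated with a toy sketch. The features and weights below are purely illustrative assumptions, not e-rater’s actual model; the point is only that a scorer built on surface proxies rewards long, rare-word-stuffed gibberish without ever reading for meaning.

```python
# Hypothetical sketch of a proxy-only essay scorer (illustrative features
# and weights; NOT e-rater's actual algorithm). Meaning never enters the
# calculation, so text padded with long rare words and long sentences
# scores well regardless of sense.

def proxy_score(essay: str) -> float:
    words = essay.split()
    if not words:
        return 0.0
    # Proxy 1: overall length of the essay.
    length = len(words)
    # Proxy 2: average sentence length (long vacuous sentences help).
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sentence_len = length / max(len(sentences), 1)
    # Proxy 3: share of "rare-looking" words (here, crudely, 10+ letters).
    rare = sum(1 for w in words if len(w.strip(".,;:")) >= 10)
    rare_share = rare / length
    # A weighted sum of proxies stands in for the final score.
    return 0.01 * length + 0.5 * avg_sentence_len + 20.0 * rare_share

short_plain = "The test is bad. It cannot read."
padded_gibberish = ("Notwithstanding incontrovertible adumbrations, the "
                    "epistemological configuration perspicaciously "
                    "promulgates administration considerations tremendously.")
# The gibberish outscores the plain, sensible text.
assert proxy_score(padded_gibberish) > proxy_score(short_plain)
```

This is exactly why the strategies named above, long infrequent words and vacuous sentence padding, inflate such a score.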

Moreover, the BABEL Generator is so primitive that there are much easier ways of detecting BABEL essays. We did not expect our first attempt to fool all the Robo-graders we could access to succeed, but because it did, we stopped. We had proved our point. One of the student researchers was taking physics at Harvard and hard-coded into BABEL’s responses some of the terminology of subatomic particles, such as neutrino, orbital, plasma, and neuron. E-rater and the other Robo-graders did not seem to notice. A simple program scanning for these terms could have saved the trouble of generating a half-million essays.
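The kind of simple scan just mentioned fits in a few lines. The term list and threshold here are illustrative assumptions, not ETS’s actual advisory:

```python
# Hypothetical sketch of a scanner for the physics terms hard-coded into
# BABEL's output (illustrative term list and threshold, not ETS's advisory).

BABEL_TELLTALES = {"neutrino", "orbital", "plasma", "neuron"}

def looks_babel_generated(essay: str, threshold: int = 2) -> bool:
    # Normalize each word and check how many telltale terms appear.
    words = {w.strip(".,;:!?").lower() for w in essay.split()}
    return len(BABEL_TELLTALES & words) >= threshold

assert looks_babel_generated("The neutrino will authenticate the orbital plasma.")
assert not looks_babel_generated("Students deserve teachers who read their essays.")
```

A check like this costs almost nothing, which is the point: detecting a primitive generator did not require half a million synthetic essays.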

ETS is not satisfied with just automating the grading of the writing portions of its various tests. ETS researchers have developed SpeechRater, a Robo-grading application that scores the speaking sections of the TOEFL test. There is a whole volume of scholarly research articles on SpeechRater published by the well-respected Routledge imprint of the Taylor and Francis Group. However, the short biographies of the nineteen contributors to the volume list seventeen as current employees of ETS, one as a former employee, and only one with no explicit affiliation.

Testing organizations appear no longer to have a wide range of perspectives, or any perspective that runs counter to their very narrow psychometric outlook. This danger has long been noted. Carl C. Brigham, the eugenicist who later renounced the racial characterization of intelligence, and the creator of the SAT who then became critical of that test, wrote shortly before his death that research in a testing organization should be governed and implemented not by educational psychologists but by specialists in academic disciplines, since it is easier to teach them testing than to try to “teach testers culture.”

The obvious home for such a research organization is the US Department of Education. Just as the FDA vets the efficacy of drugs and medical devices, there should be an agency that verifies not only that assessments measure what they claim to measure but also that the instrument is not biased toward or against specific ethnic or socio-economic groups. There was an old analogy question on the SAT (which no longer has analogy items) that read: “Runner is to marathon as: a) envoy is to embassy; b) martyr is to massacre; c) oarsman is to regatta; d) referee is to tournament; e) horse is to stable.” The correct answer is c: oarsman is to regatta. Unfortunately, there are very few regattas in the Great Plains or inner cities.

Education Trust, led by former Secretary of Education John King, sent two letters to the Biden administration, urging the administration not to allow states to receive waivers from the mandated federal testing. The signers of the letters were not the same. As State Commissioner in New York, King was a fierce advocate for Common Core and standardized testing.

Leonie Haimson, leader of Class Size Matters, the Parent Coalition for Student Privacy, and board member of the Network for Public Education, wrote this about the pro-testing coalition assembled by King:

I asked my assistant Michael Horwitz to figure out which organizations were on the first Ed Trust letter pushing against state testing waivers, but not the letter that just came out, advocating against allowing flexibility by using local assessments instead.  National PTA, NAN (Al Sharpton’s group), LULAC, KIPP and a few others did drop off the list. 

I then asked Leonie if she could add the amounts of funding to these organizations by the Gates Foundation and the Walton Foundation and she replied:

The largest beneficiary of their joint funding among these organizations has been KIPP, at over $97M; then Ed Trust, which spearheaded both letters, at nearly $58 million. Also TNTP at $54M, NACSA at $44M, Jeb Bush’s FEE at nearly $32M, and 50Can at $29M. [TNTP used to be called “The New Teacher Project” and was created by Michelle Rhee.] Michael Horwitz did the research.

Signers on the first letter:

The following orgs were on the second letter, but not the first: many more obviously pro-charter, right-wing and more local organizations:

Leonie Haimson 
leoniehaimson@gmail.com

Follow on twitter @leoniehaimson 

Host of “Talk out of School” WBAI radio show and podcast at https://talk-out-of-school.simplecast.com/

Three researchers published an article in the Kappan that is highly critical of the edTPA, a test used to assess whether teacher candidates are prepared to teach. Over the years, there have been many complaints about the edTPA, because it replaces the human judgment of teacher educators with a standardized instrument. Its proponents claim that the instrument is more reliable and valid than human judgment.

Drew H. Gitomer, José Felipe Martínez, and Dan Battey disagree. Their article raises serious criticisms of the edTPA.

They begin:

The use of high-stakes assessments in public education has always been contested terrain. Long-simmering debates have focused on their benefits, the harms they cause, and the roles they play in decisions about high school graduation, school funding, teacher certification, and promotion. However, for all the disagreement about how such assessments affect students and teachers, and how they should or should not be used, it has generally been assumed that the assessment instruments themselves follow standard principles of measurement practice.  

At the most basic level, test developers are expected to report truthful and technically accurate information about the measurement characteristics of their assessments, and they are expected to make no claims about those assessments for which they have no supporting evidence. Violating these fundamental principles compromises the validity of the entire enterprise. If we cannot trust the quality of the assessments themselves, then debates about how best to use them are beside the point. 

Our research suggests that when it comes to the edTPA (a tool used across much of the United States to make high-stakes decisions about teacher licensure), the fundamental principles and norms of educational assessment have been violated. Further, we have discovered gaps in the guardrails that are meant to protect against such violations, leaving public agencies and advisory groups ill-equipped to deal with them. This cautionary tale reminds us that systems cannot counter negligence or bad faith if those in position to provide a counterweight are unable or unwilling to do so. 

Background: Violations of assessment principles 

The edTPA is a system of standardized portfolio assessments of teaching performance that, at the time this research was conducted, was mandated for use by educator preparation programs in 18 states, and approved in 21 others, as part of initial certification for preservice teachers. It builds on a large body of research over several decades focused on defining effective teaching and designing performance assessments to measure it. The assessments were created and are owned by Stanford Center for Assessment, Learning, and Equity (SCALE) and are now managed by Pearson Assessment, with endorsement and support from the American Association of Colleges for Teacher Education (AACTE). By 2018, just five years after they were introduced, they were among the most widely used tools for evaluating teacher candidates in the United States, reaching tens of thousands of candidates in hundreds of programs across the country. They have substantially influenced programs of study in teacher education. And for the teaching candidates who take them, they are a major undertaking, requiring them to make a substantial time investment, as well as costing them $300.  

In 2018, two of us (Drew Gitomer and José Felipe Martínez) participated in a symposium at the annual meeting of the National Council on Measurement in Education (NCME), which included a presentation on edTPA by representatives of Pearson and SCALE (Pecheone et al., 2018). We were struck by specific claims that were made in that presentation: Reported rates of reliability seemed implausibly high, and reported rates of rater error seemed implausibly low, implying that a teaching candidate would receive the same scores regardless of who rated the assessment. A well-established feature of performance measures of teaching, similar to those being used in edTPA, is that raters will often disagree on their scores of any single performance and, therefore, the scoring reliability of any single performance is inevitably quite modest. The raw data on rater agreement that edTPA reports are consistent with the full body of work on these assessments. Yet, the reliabilities they reported, which depend on these agreement levels, were completely discrepant from all other past research.

At the NCME session, we publicly raised these concerns, and we offered to engage in further conversation to clarify matters and address our questions about the claims that were made. Upon further investigation, we found that the information presented at the session was also reported in edTPA’s annual technical reports — the very information state departments of education rely on to decide whether to use the edTPA for teacher licensure.  

In December 2019, we published an article detailing serious concerns about the technical quality of the edTPA in the American Educational Research Journal (AERJ), one of the most highly rated and respected journals in the field of educational research (Gitomer et al., 2019). We argued that edTPA was using procedures and statistics that were, at best, woefully inappropriate and, at worst, fabricated to convey the misleading impression that its scores are more reliable and precise than they truly are. Our analysis showed why those claims were unwarranted, and we ultimately suggested that the concerns were so serious that they warranted a moratorium on using edTPA scores for high-stakes decisions about teacher licensure.  

Then they discovered that members of the Technical Advisory Committee had not met very often.

Our brilliant reader Laura Chapman, retired educator, decided to dig deep into the politics of education reform in Minnesota in response to a post about a dubious constitutional amendment sponsored by the Federal Reserve Bank.

Chapman, who lives in Ohio, writes:

I am not from Minnesota, but this post sent me deep into some policies there. The idea is to frame education as a fundamental right to “quality schools” as “measured against uniform achievement standards set forth by the state.”

No. This law is written as if the standard-setting process is a business-as usual-review of existing standards and benchmarks for learning, with periodic revisions. It is not.

Right now, there is a huge controversy over the social studies standards. The battle is about whose histories count and whether conservatives should settle for anything other than patriotism as the major purpose of teaching American history. https://patch.com/minnesota/across-mn/controversy-over-mn-s-social-studies-standards-explained

Students learning English (ELLs) are unlikely to pass the absurd requirements being proposed by the Federal Reserve (why bankers?) as a constitutional amendment (why an amendment?).

Minnesota has NO academic tests except those in English. According to a 2020 report from the Migration Policy Institute, and the 2015 American Community Survey, at least 193,600 Minnesota residents have children still learning English. All are in harm’s way. The largest foreign-born groups in Minnesota are from Mexico (67,300), Somalia (31,400), India (30,500), Laos including Hmong (23,300), Vietnam (20,200), China excluding Hong Kong and Taiwan, Ethiopia (19,300), and Thailand including Hmong (16,800). One of the fastest growing immigrant groups in Minnesota is the Karen people, an ethnic minority in conflict with the government in Myanmar. Most of the estimated 5,000 Karen in Minnesota came from refugee camps in Thailand. Ojibwe and Dakota are the indigenous languages of Minnesota.

Many of Minnesota’s charter schools are devoted to segregating and strengthening the identities of linguistic/ethnic groups. There are three dual language Spanish-English schools. Eight charter schools are devoted to immersion in these languages/cultures: Chinese, French, German, Korean, Mandarin Chinese, Russian, and Spanish. There are at least five Hmong immersion charter schools, and two for Ojibwe immersion. Two charter schools offer ELL education for East African families and one offers education using American Sign Language/English bilingual approach.

Recent reports also show how charter schools are racially segregated. In St Paul, one hundred percent of students at Higher Ground Academy are black or African-American. This percentage is about the same for Minneapolis’s Friendship Academy. In both cities the overall population of black or African-American residents is below twenty percent. By design, many charter schools in Minnesota are segregated schools. Will these schools be subjected to the wishes of the bankers or not?

In 2021, the Minnesota Federal Reserve, having no expertise in education, called in “experts” to make suggestions on a fix for so-called achievement gaps, meaning differences in scores on standardized tests. This “we-can-fix-it” program was sponsored by all 12 of the nation’s District Banks in the Federal Reserve System. In other words, what happens in Minnesota may not be limited to Minnesota but may extend to the orbit of District Banks in Atlanta, Boston, Chicago, Cleveland, Dallas, Kansas City, New York City, Philadelphia, Richmond (VA), San Francisco, and St. Louis.

Among the highly visible “experts” called in for this multi-state program were Geoffrey Canada, president of the well-endowed Harlem Children’s Zone (endowment about $148 million, and sponsor of the Promise Academy brand of K-12 charter schools), and Salman Khan, founder and CEO of the online Khan Academy and Khan Academy Kids. The papers for this program also featured the post-Katrina takeover of New Orleans schools as if it were exemplary. https://www.minneapolisfed.org/article/2021/feds-racism-and-the-economy-series-explores-racial-inequity-in-the-education-system.

Bankers are clueless about education but they have an agenda certain to harm thousands of students in Minnesota, especially ELL students, and if applicable to charter schools, the many students ill prepared to take a test only available in English.

The last thing we need is the nation’s clueless bankers making permanent changes in education based on Minnesota’s proposed model of “quality.”