In response to another reader, our frequent commenter Krazy TZ offered his reading list:
let me give my advice by [painfully] putting a few books into first, second and third groups. These are based solely on those that have been the most helpful for me because they were well-written and meant for a broad audience, generally jargon-free, shorter and smaller, and [not the least important consideration] cheaper. Other folks may have other picks or rankings.
First group. Someone is just starting out. High-stakes standardized testing and the charterite-privatizer complex built on it seem so, you know, scientific and objective and all. Well, gag me with a spoon why don’t you! If someone reads MAKING THE GRADES: MY MISADVENTURES IN THE STANDARDIZED TESTING INDUSTRY by Todd Farley (2009, paperback), s/he will realize that the puny man behind the curtain is the real Wizard of Obfuscation, er, Oz. Lively, entertaining, uplifting, heart-breaking. But what about the numbers?!?!? Darrell Huff, HOW TO LIE WITH STATISTICS (paperback, original 1954, reprinted many times, I have the 1993 Norton paperback version). Big help in beginning to demystify stats/numbers and in inoculating one against mathematical intimidation. If you’ve made it through the first two, then on to Banesh Hoffmann’s THE TYRANNY OF TESTING (1964, original 1962, Dover edition 2003). Straightforward explanations of the fundamental problems with high-stakes standardized testing. Relevant to today’s ed debates. And to put a little historical perspective on all this, MANY CHILDREN LEFT BEHIND: HOW THE NO CHILD LEFT BEHIND ACT IS DAMAGING OUR CHILDREN AND OUR SCHOOLS (2004, paperback, includes Deborah Meier, Stan Karp, Monty Neill, Alfie Kohn, Linda Darling-Hammond, and Theodore R. Sizer). The train wreck was anticipated long before it happened.
All paperbacks, all cheap, all written for the non-specialist.
Second group. You now want to get your hands dirty with some of the technical, er, “stuff.” THE MYTHS OF STANDARDIZED TESTS: WHY THEY DON’T TELL YOU WHAT YOU THINK THEY DO by Phillip Harris, Bruce M. Smith, and Joan Harris (hardcover, 2011). I can’t praise this enough; difficult questions and answers made accessible. Now you will really start to understand what Todd Farley and Banesh Hoffmann were getting at. Follow that up with Daniel Koretz’s MEASURING UP: WHAT EDUCATIONAL TESTING REALLY TELLS US (2009, paperback) and you can start confounding friend and foe alike with such gems as “differential item functioning” and “reliability is consistency of measurement” and why a percentile is, er, a percentile on a norm-referenced test and what the heck a percentile is in the first place. Koretz is an expert and experienced psychometrician but pretty much blows holes in every major charterite/privatizer claim about high-stakes standardized testing. To continue to fortify yourself in the testing arena, COLLATERAL DAMAGE: HOW HIGH-STAKES TESTING CORRUPTS AMERICA’S SCHOOLS by Sharon L. Nichols and David C. Berliner (2007, paperback). Again, the damage to public education, people young and old, and democracy is laid out in painful detail. And then to further strengthen us non-math majors in the defensive arts when it comes to warding off the evil magic of mathematical intimidation, there’s Joel Best’s 2012 updated version of DAMNED LIES AND STATISTICS: UNTANGLING NUMBERS FROM THE MEDIA, POLITICIANS, AND ACTIVISTS (hardcover). As a famous [infamous?] Supreme Court Justice might say, “Whoop De Damn Do!”
Third group. In a sense I’ve saved the best for the last. If you’ve read the first two sets of books you are in for a real treat. Two paperbacks by the late Gerald Bracey; he died in 2009. Perhaps of him it could be said, “we will not see the like again” or “when they made him they broke the mold.” An absolutely indomitable figure in the ed debates, a fierce defender of public education who, I firmly think, would have approved heartily of the subtitle of this blog: “A site to discuss better education for all.” Want to know how to lay waste to the morally and intellectually bankrupt arguments of the leading charterites/privatizers? EDUCATION HELL: RHETORIC VS. REALITY (Transforming the Fire Consuming America’s Schools), 2009 paperback, and READING EDUCATIONAL RESEARCH: HOW TO AVOID GETTING STATISTICALLY SNOOKERED, 2006 paperback. The 2006 book is especially helpful because it embodies a principle well articulated by one of those old dead white guys: “No problem can withstand the assault of sustained thinking” [Voltaire].
Where would I put Diane’s THE DEATH AND LIFE OF THE GREAT AMERICAN SCHOOL SYSTEM and the soon-to-be-here REIGN OF ERROR? Wherever you like—beginning, middle or end. Perhaps it depends if people are reading them in solitary fashion or as part of a study group.
This list is not exhaustive, even of the books I’ve read. But I think in this case brevity is the soul of, er, usefulness.
I hope this helps.

Outstanding list.
I think I’d add one for background: Gould’s MISMEASURE OF MAN
and a very new one by Jim Horn and Denise Wilburn, THE MISMEASURE OF EDUCATION.
Michael Paul Goldenberg: I respectfully refuse to disagree.
🙂
“Only the educated are free.” [Epictetus]
MPG:
I would be very careful in reading Gould’s Mismeasure of Man. Gould was a very powerful writer and an ever-present pundit. Alas, he had a very clear worldview and was not as careful as he should have been in assessing the work of others. The following peer-reviewed article points out the errors in Gould’s assessment of Samuel George Morton’s work on cranial capacities.
http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001071
If anyone has found a critique or refutation of this Lewis et al article please let me know.
I will keep on reading.
Thank you for the list!
Years ago in grad school I read Education and the Cult of Efficiency by Raymond Callahan which describes the emergence of the factory school that remains stuck in everyone’s minds today… especially the quants who want to use seemingly exacting mathematical models to measure everything having to do with teaching and learning…
I’ll second Callahan’s book!
Jim Horn’s and Denise Wilburn’s The Mismeasure of Education, definitely.
Also these witty ditties, each written by David Hutchens and illustrated by Bobby Gombert:
* Outlearning the Wolves: Surviving and Thriving in a Learning Organization
* Shadows of the Neanderthal: Illuminating the Beliefs That Limit our Organizations
* The Tip of the Iceberg: Managing the Hidden Forces That Can Make or Break Your Organization
* The Lemming Dilemma: Living with Purpose, Leading with Vision
* Listening to the Volcano: Conversations That Open Our Minds to New Possibilities
Here is the whole first chapter from Outlearning the Wolves:
“This is a wolf.”
Here is the whole second chapter:
“This is a sheep.”
And the whole third chapter:
“Wolves eat sheep.
“Any questions?”
Hutchens then goes on to tell about a community of sheep resigned to the “fact of life” that wolves always have and always will eat sheep. Until, that is, one badass sheep (BAS) says, in effect, “Hey, wait a minute, here! Fact of life? Really? Seems to me there’s something we might go learn!” Ultimately the one BAS proves the spark that awakens his community to the reality that “wolves eat sheep” is more the community’s belief and less a fact of life. In the end, the community of sheep outlearns the wolves.
But then wolves also learn.
How very, very wonderful to see these lists! Bravo!!! Keep them coming!!!
I’m not sure that E D Hirsch approved the list, however, or any additions to it. . . Oh, my!
?
I just started reading Bracey’s Reading Educational Research: How to Avoid Getting Statistically Snookered. Actually, I’ve been skimming through and reading choice bits before going at it cover to cover. It’s wonderful.
Everywhere one looks in ed deform, these days, one sees marketing/propaganda pieces presented as “studies.”
These invariably have the high-end production values of, say, a Hermes catalog or a British Petroleum annual report, and they are inevitably filled with photos of happy teachers who just LOVE teaching to the test and having their autonomy taken from them, and photos of happy children who just LOVE doing that test prep on standard RI4.6.3a and filling in those bubbles.
The stock photo companies must be making a fortune licensing those pics to the deform groups. But, of course, money isn’t an issue for the deformers, who are even wealthier than I am pretty.
: )
At any rate, Bracey’s book is a delightful expose of deformer junk science. I highly recommend it.
Robert D. Shepherd: moments ago I finished my rereading of Banesh Hoffmann’s THE TYRANNY OF TESTING (1964).
I think you will appreciate his closing paragraph, pp. 216-217:
“All methods of evaluating people have their defects—and grave defects they are. But let us not therefore allow one particular method to play the usurper. Let us not seek to replace informed judgment, with all its frailty, by some inexpensive statistical substitute. Let us keep open many diverse and non-competing channels towards recognition. For high ability is where we find it. It is individual and [217] must be recognized for what it is, not rejected out of hand simply because it does not happen to conform to criteria established by statistical technicians. In seeking high ability, let us shun over-dependence on tests that are blind to dedication and creativity, and biased against depth and subtlety. For that way lies testolatry.”
Whether one agrees with much or any of what Hoffmann argues in his book, at the very least a close reading of it should provoke thoughtful consideration of issues very much still with us.
Why do we need to keep thinking and rethinking supposedly “settled” questions?
“We can’t solve problems by using the same kind of thinking we used when we created them.” [Albert Einstein]
🙂
P.S. You can’t go wrong reading Gerald Bracey’s READING EDUCATIONAL RESEARCH (2006). As a kind of shorthand guide to his exposition he lists and explains 32 “Principles of Data Interpretation.” Most are dynamite. Three of my favorites: “5. Be sure the rhetoric and the numbers match.”; “7. Beware of simple explanations for complex phenomena.”; and “23. If a situation really is as alleged, ask, ‘So what?’”
As an addendum to #7 he quotes H. L. Mencken: “For every complex problem there is an answer that is clear, simple and wrong.”
🙂
LOL! Wonderful!
That’s just awesome, Krazy! Thank you for making my evening.
We should miss no opportunity to call these people out on their misuses of statistics. Statistics don’t lie, but people lie (or fool themselves) all the time with statistics and measurement techniques that they don’t understand.
And here, once again, for your pleasure, this choice bit from Albert Einstein, quoted on Susan Ohanian’s website:
“I believe in standardizing automobiles. I do not believe in standardizing human beings. Standardization is a great peril which threatens American culture. . . . Such men [as Henry Ford] do not always realize that the adoration which they receive is not a tribute to their personality but to their power or their pocketbook.
—— Albert Einstein, Saturday Evening Post interview, 10/26/1929″
Ford, BTW, was a big fan of eugenics. See Edwin Black’s brilliant War Against the Weak, one of the finest books I’ve read in years.
“Let us keep open many diverse and non-competing channels towards recognition. For high ability is where we find it. It is individual.”
I would strike the word “high” from that–the statement doesn’t need that qualifier–but otherwise, it’s magnificent. yes yes yes yes yes
Robert D. Shepherd: I fear I am trying the patience of the owner of this blog, but I feel compelled to take up a bit more space with a wondrously revealing quote from Stephen Jay Gould’s THE MISMEASURE OF MAN (1996 updated edition).
To avoid the charge of misusing Gould’s work, I must mention that his primary target was biological determinism and the pernicious social injustices justified and defended by those who promoted biodeterminism as ‘good science.’ However, note that he spends almost a hundred pages on a section he entitled “The Hereditarian Theory of IQ: An American Invention” in which he discusses Alfred Binet, H.H. Goddard, Lewis M. Terman, R.M. Yerkes, and C.C. Brigham, i.e., the pioneers of the ‘mental testing’ industry that has led to the current plague of high-stakes standardized tests.
Ok, with that out of the way, and keeping in mind the actual [not theoretical or imaginary] nature and uses of standardized tests to label, sort and rank:
“We pass through this world but once. Few tragedies can be more extensive than the stunting of life, few injustices deeper than the denial of an opportunity to strive or even to hope, by a limit imposed from without, but falsely identified as lying within.” [pp. 60-61]
Wow.
🙂
I have not read any of Bracey’s books. I just read one of his last articles: http://www.ascd.org/publications/educational-leadership/nov09/vol67/num03/The-Big-Tests@-What-Ends-Do-They-Serve%C2%A2.aspx
I am not impressed. I won’t go into everything, but the first thing that caught my attention was that he used ranks for the TIMSS data. So I wondered about the actual scores.
Here are the TIMSS 2007 math scores for 4th graders:
TIMSS scale average 500
Hong Kong 607
Singapore 599
Chinese Taipei 576
Japan 568
Kazakhstan 549
Russian Federation 544
England 541
Latvia 537
Netherlands 535
Lithuania 530
United States 529
Germany 525
Denmark 523
And for 8th graders:
TIMSS scale average 500
Chinese Taipei 598
Korea, Rep. of 597
Singapore 593
Hong Kong 572
Japan 570
Hungary 517
England 513
Russian Federation 512
United States 508
Lithuania 506
Czech Republic 504
http://nces.ed.gov/timss/table07_1.asp
The actual scores present a different picture than the rankings do – as anyone who has read How to Lie with Statistics should know. The actual scores indicate that the US is considerably behind some of its key economic competitors.
Note that I am not saying anything about the adequacy of these tests or whether international comparisons are always reasonable. I am saying that those who criticize how others use data should pay attention to their own strictures.
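To make the rank-versus-score distinction concrete, here is a minimal Python sketch of my own (an editorial illustration, not from Bracey or the NCES page, beyond the 8th-grade averages quoted above) that sorts those averages and prints each country’s rank alongside its gap from the US score:

# Minimal sketch: a rank alone hides whether the gap behind it is 4 points or 90.
# The 8th-grade averages below are the TIMSS 2007 figures quoted above.
scores_8th = {
    "Chinese Taipei": 598, "Korea, Rep. of": 597, "Singapore": 593,
    "Hong Kong": 572, "Japan": 570, "Hungary": 517, "England": 513,
    "Russian Federation": 512, "United States": 508, "Lithuania": 506,
    "Czech Republic": 504,
}
us = scores_8th["United States"]
ranked = sorted(scores_8th.items(), key=lambda kv: kv[1], reverse=True)
for rank, (country, score) in enumerate(ranked, start=1):
    print(f"{rank:2d}. {country:<20} {score}  (gap vs. US: {score - us:+d})")

Run it and the US sits 9th of the 11 countries listed, but the gaps behind that rank range from 90 points (Chinese Taipei) down to 4 points (Russian Federation) – which is exactly the information a bare ranking throws away.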
Gerald Bracey, READING EDUCATIONAL RESEARCH: HOW TO AVOID GETTING STATISTICALLY SNOOKERED (2006).
Principles of Data Interpretation:
“5. Be sure the rhetoric and the numbers match.”
“9. Be aware of whether you are dealing with rates or numbers. Similarly, be aware of whether you are dealing with rates or scores.” (A toy numerical sketch of this one follows the list below.)
“10. When comparing either rates or scores over time, make sure the groups remain comparable as the years go by.”
“11. Be aware of whether you are dealing with ranks or scores.”
“23. If a situation really is as alleged, ask, ‘So what?’”
“25. Rising test scores do not necessarily mean rising achievement.”
“28. Make certain that descriptions of data do not include improper statements about the type of scale being used, for example, ‘The gain in math is twice as large as the gain in reading.’”
“29. Do not use a test for a purpose other than the one it was designed for without taking care to ensure it is appropriate for the other purpose.”
“30. Do not make important decisions about individuals or groups on the basis of an individual test.”
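As a toy illustration of #9 (my own made-up numbers, not Bracey’s), a raw count can rise while the rate behind it falls, simply because the denominator grew:

# Hypothetical enrollment/dropout figures (not from Bracey): the number of
# dropouts goes up, yet the dropout rate goes down, because enrollment grew faster.
year_1 = {"enrolled": 10_000, "dropouts": 500}
year_2 = {"enrolled": 14_000, "dropouts": 630}
for label, y in (("Year 1", year_1), ("Year 2", year_2)):
    rate = y["dropouts"] / y["enrolled"]
    print(f"{label}: {y['dropouts']:,} dropouts of {y['enrolled']:,} enrolled, rate = {rate:.1%}")
# Year 1: 500 dropouts of 10,000 enrolled, rate = 5.0%
# Year 2: 630 dropouts of 14,000 enrolled, rate = 4.5%  -> more dropouts, lower rate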
In the article accessed by the link [see below], Gerald Bracey follows his own advice.
Link: http://www.ascd.org/publications/educational-leadership/nov09/vol67/num03/The-Big-Tests@-What-Ends-Do-They-Serve%C2%A2.aspx
How does it feel to lose an argument with a dead man?
You should never have followed Marx’s advice:
“The secret of life is honesty and fair dealing. If you can fake that, you’ve got it made.”
Groucho, that is.
🙂
KrazyTA:
In what way does your comment respond to the point that I raised? You simply asserted a fact.
I agree with each of the points that you cite Bracey as having made. In the article I referenced, however, he simply did not follow his own advice, because the actual TIMSS scores tell a different story than the one he argued.
Bernie, if you correct these results for the socioeconomic status of the students taking the tests, then we are at the top or near it.
Here, from Martin Carnoy and Richard Rothstein, “What Do International Tests Really Show About U.S. Student Performance?”
Because social class inequality is greater in the United States than in any of the countries with which we can reasonably be compared, the relative performance of U.S. adolescents is better than it appears when countries’ national average performance is conventionally compared.
Because in every country, students at the bottom of the social class distribution perform worse than students higher in that distribution, U.S. average performance appears to be relatively low partly because we have so many more test takers from the bottom of the social class distribution.
A sampling error in the U.S. administration of the most recent international (PISA) test resulted in students from the most disadvantaged schools being over-represented in the overall U.S. test-taker sample. This error further depressed the reported average U.S. test score.
If U.S. adolescents had a social class distribution that was similar to the distribution in countries to which the United States is frequently compared, average reading scores in the United States would be higher than average reading scores in the similar post-industrial countries we examined (France, Germany, and the United Kingdom), and average math scores in the United States would be about the same as average math scores in similar post-industrial countries.
A re-estimated U.S. average PISA score that adjusted for a student population in the United States that is more disadvantaged than populations in otherwise similar post-industrial countries, and for the over-sampling of students from the most-disadvantaged schools in a recent U.S. international assessment sample, finds that the U.S. average score in both reading and mathematics would be higher than official reports indicate (in the case of mathematics, substantially higher).
This re-estimate would also improve the U.S. place in the international ranking of all OECD countries, bringing the U.S. average score to sixth in reading and 13th in math. Conventional ranking reports based on PISA, which make no adjustments for social class composition or for sampling errors, and which rank countries irrespective of whether score differences are large enough to be meaningful, report that the U.S. average score is 14th in reading and 25th in math.
Disadvantaged and lower-middle-class U.S. students perform better (and in most cases, substantially better) than comparable students in similar post-industrial countries in reading. In math, disadvantaged and lower-middle-class U.S. students perform about the same as comparable students in similar post-industrial countries.
At all points in the social class distribution, U.S. students perform worse, and in many cases substantially worse, than students in a group of top-scoring countries (Canada, Finland, and Korea). Although controlling for social class distribution would narrow the difference in average scores between these countries and the United States, it would not eliminate it.
U.S. students from disadvantaged social class backgrounds perform better relative to their social class peers in the three similar post-industrial countries than advantaged U.S. students perform relative to their social class peers. But U.S. students from advantaged social class backgrounds perform better relative to their social class peers in the top-scoring countries of Finland and Canada than disadvantaged U.S. students perform relative to their social class peers.
On average, and for almost every social class group, U.S. students do relatively better in reading than in math, compared to students in both the top-scoring and the similar post-industrial countries.
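To see mechanically what a social-class adjustment of this kind does, here is a toy Python sketch of my own; the group means and population shares below are hypothetical and are not Carnoy and Rothstein’s figures, but the arithmetic (hold the group-level averages fixed, swap in a different social-class mix) is the basic idea behind a composition adjustment.

# Toy composition adjustment (all numbers hypothetical, NOT from Carnoy & Rothstein).
# The same group-level averages produce a higher national average when the
# test-taking population contains fewer disadvantaged students.
groups = ["disadvantaged", "lower-middle", "middle", "advantaged"]
mean_score = {"disadvantaged": 460, "lower-middle": 490,
              "middle": 520, "advantaged": 550}        # hypothetical US group means
us_mix   = {"disadvantaged": 0.40, "lower-middle": 0.25,
            "middle": 0.20, "advantaged": 0.15}        # hypothetical US class mix
peer_mix = {"disadvantaged": 0.20, "lower-middle": 0.25,
            "middle": 0.30, "advantaged": 0.25}        # hypothetical peer-country mix

def weighted_average(mix):
    # National average as a mixture of group means weighted by group shares.
    return sum(mix[g] * mean_score[g] for g in groups)

print("US average with its own class mix:    ", weighted_average(us_mix))    # 493.0
print("US average reweighted to the peer mix:", weighted_average(peer_mix))  # 508.0

The reweighted figure is what the “if U.S. adolescents had a social class distribution similar to…” statements above are getting at: the same group performance, combined in a different mix, yields a noticeably different national average.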
Robert:
Thanks for the civil and substantive response.
Clearly you are correct, and the structure of the sample is a major consideration in these types of comparisons. One really does have to be concerned with apples-to-apples comparisons. For example, given that we are looking at math scores and given the continuing differential performance of male and female students, clearly the samples need to be similar on this factor as well.
My issue, however, is that Bracey simply should have included the actual scores, and then he could have made your point – assuming that there are actual data on the nature of the TIMSS samples. If there are not, then Bracey would have been engaging in hand waving. Address the issue or do not address the issue, but, as Bracey would undoubtedly and legitimately argue, do not cherry-pick. The size of the gap between the top countries and the US is not trivial; Bracey knew it and should have acknowledged it.
Thanks for the reference. I will take a look at Carnoy and Rothstein assuming it is not pay walled.
Robert D. Shepherd: thank you for having the patience to continue this thread, illustrating in sober detail #4 of Gerald Bracey’s PRINCIPLES OF DATA INTERPRETATION (2006):
“When comparing groups, make sure the groups are comparable.”
I also refer interested readers to a recent discussion on this blog entitled “What if the International Tests Are Wrong?” and its comments section.
It deals with the PISA [Programme for International Student Assessment, conducted by the OECD].
Link: https://dianeravitch.net/2013/08/20/what-if-the-international-tests-are-wrong/
I also remind interested readers that whatever trustworthiness there may be in standardized tests, it evaporates like dew in the current Southern California sun when you have non-standardized administration and scoring of same.
Keep on posting.
🙂
Krazy:
I am not sure I follow. Are you saying that there was a problem with the comparability of the TIMSS samples across countries and/or the administration of the TIMSS assessment? If so, do you have a reference and an indication of its effects? If not, what is the relevance of introducing the PISA example? One can always surmise flaws in the design of a sample or the administration of an experiment. Serious critiques, however, require that you present data to support the assumption.
If I may add a couple of books not directly related to public education in the USA but very good reads to provide some depth to analysis. First is Andre Comte-Sponville’s “A Small Treatise on the Great Virtues”. The second is “Truth: A Guide” by Simon Blackburn.
Speaking of not directly related to public education, we mustn’t forget Naomi Klein’s THE SHOCK DOCTRINE.
I’ll add two: David Berliner’s The Manufactured Crisis (1995) and The Art of War by Sun Tzu.
And, KTA, although it is not a book, I would be remiss (and the ghost of Don Quixote would quickly come to dispatch me) if I didn’t mention for the umpteenth time on this blog the most important piece of writing that destroys the very basis of educational standards, standardized testing, and the ensuing sorting and separating of students through “grading” them: Noel Wilson’s “Educational Standards and the Problem of Error,” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine. (updated 6/24/13 per Wilson email)
1. A quality cannot be quantified. Quantity is a sub-category of quality. It is illogical to judge/assess a whole category by only a part (sub-category) of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as one dimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing we are lacking much information about said interactions.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference,” each with distinct assumptions (an epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think of a college professor who “knows” the students’ capabilities and grades them accordingly), the General Frame (think of standardized testing that claims to have a “scientific” basis), the Specific Frame (think of learning by objectives, like computer-based learning, where a correct answer is required before moving on to the next screen), and the Responsive Frame (think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback). Each category has its own sources of error, and more error in the process is caused when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, that is, errorless or at least with supposedly minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations, the tests, and the results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the test making/giving/disseminating of results invalid. The basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity and end with an invalidity (except that by sheer chance every once in a while, like a blind and anosmic squirrel that finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it measures “‘something,’ and we can specify some of the ‘errors’ in that ‘something,’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade” (sic). Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self evident consequences. And so the circle is complete.”
In other words, students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of themselves. Although paradoxical in a sense, “I’m an ‘A’ student” is almost as harmful as “I’m an ‘F’ student” in hindering students from becoming independent, critical, and free thinkers. And having independent, critical, and free thinkers is a threat to the current socio-economic structure of society.
I would like to respectfully add, The Teachers’ Lounge (Uncensored): A Funny, Edgy, Poignant Look at Life in the Classroom. The book honors teachers and recognizes their challenges, takes policymakers to task for their boneheaded legislation, and has a strong anti-testing message. It’s a fun read with a pointed message.
It also includes a kickass Foreword by Nancy Carlsson-Paige.
I just bought a copy of MYTHS OF STANDARDIZED TESTS to give to my state Representative…even if he doesn’t read it, it’ll be there, staring at him. Have only read four books on the list, including Diane’s DEATH AND LIFE. I will work my way through these.
I think this is a good place to plug my favorite used-book site.
http://www.alibris.com/
Independent sellers list their wares in the centralized database, so you can compare editions and prices. There are customer ratings for the dealers, so you can choose a reliable one.
chemtchr:
Good point. I have had really good luck with http://www.abebooks.com. It appears to have a wider collection of used booksellers – although I think that there is considerable overlap.
Have none of you read “Teach the Best and Stomp the Rest: The American Schools . . . Guilty as Charged?” Trafford Publishing, hard cover, paperback or e-book. (568 pages)
(charter schools, high stakes testing, merit pay for teachers, school choice, school vouchers, private management of public schools, No Child Left Behind, Race To the Top, 21 pithy editorial cartoons, and much more.).