Archives for the month of: August, 2017

The hits just keep on coming!

Betsy DeVos just hired a former dean from for-profit online DeVry University to police fraud in higher education.

DeVry was forced to pay $100 million for defrauding students.

Which is the incompetent? Which is the malevolent? Or are they both? Neither is qualified by experience or temperament for the jobs they hold. This story was posted on the Politico website:


The controversial attorney who runs the Education Department’s civil rights division cited her work attacking Bill and Hillary Clinton at the top of her resume when she applied to work for President Donald Trump, according to a copy of the document obtained by POLITICO.

Candice Jackson, who brought a group of women who had accused President Bill Clinton of sexual misconduct to a presidential debate last year between Trump and Hillary Clinton, listed that event as one of her “top five qualifications” for working in the administration.

At the Education Department, Jackson has taken a prominent role helping Education Secretary Betsy DeVos shape federal policy pertaining to protections for transgender students and the handling of campus sexual assault cases. She drew fire in June for telling The New York Times that 90 percent of campus sexual assault cases “fall into the category of ‘we were both drunk.'”

On her resume, Jackson noted that she had steadfastly attacked Hillary Clinton’s “lifelong corruption and hypocritical claim to defend women and children” in ads and videos and brought a “unique perspective due to also being a gay Republican.”

Jackson joined the Education Department in the spring.

POLITICO obtained the resume from American Oversight, a watchdog group that acquired it using a Freedom of Information Act request. It’s not clear whether the document was submitted directly to the Education Department or by another means, such as to the Trump transition team.

Melanie Sloan, senior adviser at American Oversight, said Jackson’s hiring is an example of Trump’s “clear pattern of filling important roles in his administration with ideologues and political hacks.

“Nowhere is this more evident than at the Department of Education, where Secretary DeVos — despite a total absence of experience in management or education policy — now oversees thousands of employees and over $60 billion in taxpayer money,” Sloan said.

When reached by telephone, Jackson referred questions to the Education Department’s press office, which did not respond to questions.

DeVos has previously defended Jackson as “a valuable part of the administration and an unwavering advocate for the civil rights of all students.”

Reposted: new link.

John Merrow recalls an anti-Semitic incident on the playing fields of his youth. He recently heard from the boys (now men) involved and found that their views were unchanged, except that the anti-Semite was now openly racist.

Remember the song in “South Pacific”? “You’ve got to be carefully taught” to hate. We aren’t born hating. At the time Rodgers and Hammerstein wrote that song, they were called Communists.

John’s post reminded me of an incident last week. I went to a splendid wine-tasting and dinner at Paumanok Vineyards on the North Fork of Long Island. I was sitting next to a very pleasant and intelligent young man. As we got into dinner, we inevitably reached the subject of politics, and he told me that he enthusiastically voted for Trump. He is certain that Democrats want socialism and that the next step is Communism. I learned that he is the son of Italian immigrants and an engineer who went to a state university. He saw no contradiction between his own background and Trump’s anti-immigrant rhetoric or his contempt for public education. As we talked, he expressed resentment about the lazy people who were getting government benefits. Why should he be taxed to pay for them? The longer the conversation went on, the more I realized that he was expressing deep-seated racism. When the subject turned to education, he made clear that in his view, teachers are ignorant, have an easy job, are overpaid, and should not have unions or tenure or pensions. Nothing I said changed any of his beliefs. I wondered why he was so bitter. I never found out. He is a solid member of Trump’s base.

The New America Foundation released three emails in response to articles about the firing of Barry Lynn, who was planning an event critical of Google, the major funder of the think tank. Lynn is a specialist in the danger of monopolies.

I read them, and it sure looks like Lynn was fired for offending Google. At one point, Anne-Marie Slaughter, president of NAF, mentions a meeting including Susan Molinari, a former GOP member of Congress from New York who is currently a registered lobbyist for Google.

Message to the New America Foundation: When you are in a hole, stop digging.

Fred Smith is a testing expert who knows how test scores can be manipulated and statistics can be twisted into data pretzels.

In this post, he calls out Mayor de Blasio for hyping the numbers to make the gains look far larger than they were. Leave aside for the moment that test scores are a ridiculous way to measure the quality of education. Leave aside the fact that using them as measures of progress feeds the privatizers’ narrative. Smith caught the Mayor juking the stats for political gain.

He writes:

Ignore that tall man behind the curtain as he cranks up the volume.

Bearing a strong resemblance to Mayor de Blasio, he is there to proclaim that, “Since 2013, English proficiency has increased by 54 percent and math proficiency has increased by 27 percent.” But the noise machine can’t hide the fact that there is little substance in all the thunder.

So, the mayor’s Tuesday press release leads with huge gains in reading and math scores—the major, if-you-don’t-remember-anything-else point he wants us to take away as he seeks re-election.

But the percentage gains are statistical smoke that befogs the mayor’s already clouded efforts in education. And, frankly, they raise questions about the incumbent’s honesty.

Three tricks prop up the testing headline:

1. The DOE press release emphasizes percentage gains, which are current results minus previous results, divided by previous results. Evidently, the increase in English scores of 14.2 percentage points (26.4 percent to 40.6 percent) from 2013 to 2017 wasn’t good enough news. Nor was the 8.1-point gain (29.6 percent to 37.8 percent) in math. So, the press office reaches into its bag of tricks and insists there has been a 54 percent gain in English proficiency under de Blasio (14.2 divided by 26.4) and a 27 percent boost in math (8.1 over 29.6).

Now, can you imagine the mayor doing this if there had been an increase in the murder rate? Let’s say homicides were up from 6 to 7 killings per 100,000 New Yorkers. Would de Blasio say that murders rose by one killing per 100,000 or by 16.7 percent? You know he would minimize the negative outcome.

2. De Blasio’s spinners also present 2013 as their baseline year. But Mayor Bloomberg owned the 2013 results and most of 2014’s as well. De Blasio didn’t arrive at City Hall until January 1, 2014. The English test was given on April 1, 2014.

Why would they go back to 2013? It allows de Blasio to start his story the year the ELA and math results tanked–creating a fictional narrative of tremendous achievement. For 2013 was the year the Common Core-aligned tests descended on the schools and rained rigor down on 440,000 New York City students. De Blasio wants to embrace Bloomberg’s bottomed-out, third-term school years as his starting point, because things could only improve after that.

Had the Mayor begun his account with the 2015 results, he would still have a 10.2-point increase to boast about in English proficiency (from 30.4 percent to 40.6 percent), but only a 2.6-point gain to show in math (35.2 percent to 37.8 percent) under his control of the schools. That would be nothing to brag about.
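
To make the arithmetic in Smith’s first point concrete, here is a minimal editorial sketch in Python, using the figures quoted above, of how the same change looks as a percentage-point gain versus the relative percentage gain the press release touts:

def point_change(previous, current):
    """Absolute change, in percentage points."""
    return round(current - previous, 1)

def relative_change(previous, current):
    """Relative change, (current - previous) / previous, expressed as a percent."""
    return round((current - previous) / previous * 100, 1)

# English proficiency, 2013 to 2017, figures from the excerpt above
print(point_change(26.4, 40.6))     # 14.2 points
print(relative_change(26.4, 40.6))  # 53.8, the press release's "54 percent"

# The hypothetical murder rate, 6 to 7 killings per 100,000
print(point_change(6, 7))           # 1.0 more killing per 100,000
print(relative_change(6, 7))        # 16.7 percent

The same underlying change can be made to look modest or dramatic depending on which figure is reported.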

Ironically, as he notes, Joel Klein too tried to claim credit for test score increases that occurred before he took office.

Sad that test scores are now a political talking point. Just proves how meaningless they are.

Johann Neem is a historian who specializes in the study of colonial American education. He wrote the following wonderful post that summarizes his excellent new book, “Democracy’s Schools: The Rise of Public Education in America.”

https://www.washingtonpost.com/news/made-by-history/wp/2017/08/20/early-america-had-school-choice-the-founders-rejected-it/?utm_term=.6adfbd195bae

He rejects the Trump-DeVos conception of education as a private choice.

He writes:

“this conception of public education ignores our collective interests as a society. America’s public schools developed because after the Revolution, Americans realized that leaving education to parental whims and pocketbooks created vast inequalities and could not ensure an educated citizenry. A return to this type of system threatens to exacerbate educational inequality, which already plagues modern America and weakens our democracy. The Founding Fathers saw freedom as the cornerstone of the nation and public schools as essential vehicles to secure it. Guided by their vision, we should work to fix America’s public schools, not abandon them.

“During the Colonial era and into the early American republic, most Americans shared DeVos’s notion that education was a family responsibility. Parents who could afford it taught their children at home, hired itinerant men or women who “kept” school for a fee, or sent older children to charter schools called academies. Most Americans had little formal schooling.

“The Revolution transformed how some Americans thought about education. These Americans agreed with Thomas Jefferson that the future of the republic depended on an educated citizenry. They also believed that the opportunities offered by schooling should be available to rich and poor alike. Many state constitutions included clauses like Georgia’s in 1777: “Schools shall be erected in each county, and supported at the general expense of the State.” But how to execute this directive? The best way, American leaders ultimately concluded, was to encourage local public schools and to limit the growth of academies.

“As early as the 1780s, Massachusetts Gov. Samuel Adams asserted that academies increased inequality because well-off families chose them over local district schools. Citizens, Adams argued, “will never willingly and cheerfully support two systems of schools.” Others shared his concern. New York Gov. George Clinton argued in 1795 that academies served “the opulent” and that all children deserved access to “common schools throughout the state.”

“Adams and Clinton identified a fundamental problem. If too many parents opted out, education would remain a private good, parceled out on the basis of economic means. Reformers, by contrast, hoped to convince Americans that education was a public good and that everyone benefited from high-quality public schools. It would not happen naturally, as Pennsylvania schools superintendent Francis Shunk observed in 1838: “It may not be easy to convince a man who has educated his own children in the way his father educated him, or who has abundant means to educate them, or who has no children to educate, that in opposition to the custom of the country and his fixed opinions founded on that custom, he has a deep and abiding concern in the education of all the children around him, and should cheerfully submit to taxation for the purpose of accomplishing this great object.”

“Horace Mann, secretary to the Massachusetts Board of Education in the 1830s, believed the only solution was to make every family a stakeholder in the public schools. Wealthier families would invest in other people’s children only if their own children attended the same schools and benefited from them. If some families decided to “turn away from the Common Schools” and send their children to a “private school or the academy,” poorer children would end up with a second-class education. To ensure that students and their parents came together as a public, “there should be a free school, sufficiently safe, and sufficiently good, for all the children” in every district. The constituency for the public schools would be forged through the schools themselves as more and more Americans sent their children to them and became devoted to their success.

“And it worked. As more and more families enrolled in the public schools, Americans developed a commitment to sustaining them. By the Civil War, most Northern states offered tuition-free, tax-supported common schools…

“Americans invested in educating one another’s children when most families had a stake in their local schools. The schools themselves fostered this commitment: The public good was not sustained by abstract principles alone but through actual institutions and investments. Certainly, parents have an obligation to look out for their children’s interests, as DeVos observes. Yet, unlike DeVos, Mann did not see this obligation as conflicting with a devotion to public schools. Every family benefited from successful public schools, not just society. But Mann recognized that widely attended public schools would also encourage Americans to fulfill their democratic obligations to one another. Making education a public good was one of the hard-fought victories of reformers after the Revolution, one that safeguarded the spirit of the Revolution and that now risks being reversed.”

Robert Shepherd is a teacher, curriculum developer, author, and much more. Here is how he describes himself on his website:

“Interests: curriculum design, pedagogical approaches, assessment, educational technology, learning, open source and crowd sourced educational materials, linguistics (syntax, semantics, child language acquisition, history of writing systems), hermeneutics, rhetoric, philosophy (Continental philosophy, Existentialism, metaphysics, philosophy of language, philosophy of mind, epistemology, ethics), classical and jazz guitar, poetry, the short story, theater, archaeology and cultural anthropology, prehistory, cultural history, history of ideas, sustainability, Anglo-Saxon literature and language, systems for emergent quality control, heuristics for innovation.”

You can download the piece here.

On the Pseudoscience of Strategies-Based Reading Comprehension Instruction, or What Current Reading Comprehension Instruction Has in Common with Astrology

Bob Shepherd

Permit me to start with an analogy.


As a hobby, I make and repair guitars. This is exacting work, requiring
precise measurement. If the top (or soundboard) of a guitar is half a millimeter
too thin, the wood may crack along the grain. If the top is half a millimeter too
thick, the guitar will not properly resonate.  For a classical guitar
soundboard made of Engelmann spruce (the usual material), the ideal thickness is
between 1.5 and 2 mm, depending on the width of the woodgrain. However,
experienced luthiers typically dome their soundboards, adding thickness (about
half a millimeter) around the edges, at the joins, and in the area just around
the soundhole (to accommodate an inset, decorative rosette and to compensate
for the weakness introduced by cutting the hole).

To measure an object this precisely, one needs good measuring equipment. To measure around the soundhole, one might use a precision tool such as a Starrett micrometer, which sells for about $450.


It probably goes without saying
that one doesn’t use an expensive, precision tool like this for a purpose for
which it was not designed. You could use it to tap in frets, but you
wouldn’t want to, obviously. It wouldn’t do the job properly, and you would
end up destroying both the work and the tool.

But that’s just what Reading
teachers and English teachers are now doing, almost universally, when they
teach “reading comprehension.” They are applying astonishingly sophisticated
tools—the minds of their students—in ways that they were not designed to work,
and in the process, they are doing significant damage.
To understand why
the default method for teaching reading comprehension now being implemented in
our classrooms fails, utterly, to work, one has to understand how the internal
mechanism for language is designed to operate.

Linguists, since Chomsky, use the
phrase Language Acquisition Device, or LAD, to refer to the
innate mechanism—hardwired into the human brain—for learning spoken language,
including grammar and vocabulary. The parts of the brain that carry out this learning
work coordinate with other parts of the brain that do pattern recognition (for
decoding) and long-term storage and retrieval of knowledge about the world (for
recognizing context and reference) to form the complex mental tool for
comprehending texts. Sadly, English teachers, reading teachers, curriculum
coordinators, education professors, test makers, the leaders of textbook
companies, and the bureaucrats and politicians who mandate state testing of
reading typically understand almost nothing of how the internal tools for
comprehending a text work,
for almost to a person, they know very little of
contemporary linguistic and cognitive science, and so instead of basing their
instruction and assessment on those sciences, they fall back on unexamined folk
ideas—established habits of the tribe—and implement instruction and
assessment that can most charitably be described as prescientific folk-theory
and superstition.
The internal tool—the language mechanism of the student
brain—is not designed to work in the ways in which “reading comprehension
specialists” are asking students to use it. Ironically, the person with the
doctorate in Reading Comprehension from an education school is the one most
likely to be wedded to prescientific techniques (pedagogy) and materials
(curricula and assessments). Such people direct reading instruction in our
schools, and the result is predictable: kids who don’t read on their own,
typically, for pleasure because they can’t—because for them reading is too
difficult to be enjoyable.

The Persistence of the Prescientific

In the past fifty years, we have
had dramatic scientific revolutions in both linguistics and in cognitive
psychology. We now know a lot about how language and knowledge are acquired and
about how people make sense of texts, and almost none of this science
has made its way into instructional techniques and materials.
A digression
on the history of physics will provide an illuminating contrast to the current
state of reading instruction.

In their brilliant and accessible little book The Evolution of Physics, Albert Einstein and Leopold Infeld describe how Galileo used a thought experiment to overturn the 1,700-year-old Aristotelian notion that objects in motion contain a motive force that is used up until they come to rest. Galileo imagined using oil to make the object travel further. Then he imagined using a perfect oil that would perfectly reduce the friction acting on the object. The result was the idea codified in Newton’s First Law of Motion: objects in motion do not stop because they use up their force; instead, they persevere indefinitely (forever) in uniform motion until they are acted upon by an external force that changes their motion. The Aristotelian notion is entirely intuitive. The Galilean/Newtonian notion is quite counterintuitive. However, the Aristotelian notion is false, and the Galilean/Newtonian one true. Aristotle’s was a prescientific folk theory of motion. It made sense to people, but it was wrong, wrong, wrong, and the development of modern technology and science was not possible until it was overthrown.

Other folk and
pseudo-scientific theories of physics and astrophysics have held sway
throughout the centuries—the theory that fire is the release, during
combustion, of an element called phlogiston; the theory that light propagates
as waves in an invisible medium called the ether that fills space; the theory
that heavier objects fall faster than light ones do; the theory that the Earth
is flat; the theory that the sun travels across the sky; the theory that the
Earth is at the center of the universe and that planets revolve around it in
epicycles—spheres within spheres. All these notions made sense based on
people’s everyday observations and their intuitive thinking about those observations,
and all were absolutely wrong.

Now, imagine that
you go into a high school in the United States today (in 2016), pick up an
introductory physical science text, and find that it teaches the Aristotelian
theory of motion, the phlogiston theory of fire, and the waves-through-ether
theory of light propagation. Suppose the space science textbook in that school
teaches that a flat earth sits at the center of the universe and that planets
travel around it in epicycles. You would be shocked, appalled, scandalized.
But, of course, this would never happen. Our physics textbooks try to teach
elementary contemporary physics.

But walk into
almost any K-12 school in the United States today and you will find instructional
and assessment techniques and materials that are built upon prescientific, folk
theories of grammar, vocabulary acquisition, and reading comprehension that are
completely at odds with our contemporary scientific understandings of these. Walk
into teacher training institutions and you will find, typically, that the
prescientific, folk theories are the ones being taught. Pick up any state or
district interim reading assessment, and you will find that it was built on these folk theories.

What Reading Comprehension Involves

In order to read a text with
comprehension, one needs to be able to

1. Interpret automatically (unconsciously and fluidly) the symbols being used (the phonics component of decoding). Note that this crucial precursor for reading comprehension requires that the student be able to recognize, quickly and without effort and, indeed, without conscious rehearsal of the fact that they are doing so, the roughly 42 separate sound-symbol correspondences of written English.

2. Parse automatically (unconsciously and fluidly) the syntax of the sentences (the grammar component of decoding). Note that this crucial precursor for reading comprehension requires that the student be able to parse, quickly and without effort and, indeed, without conscious rehearsal of the fact that they are doing so, many thousands of syntactic forms.

3. Interpret the meanings of the words and phrases (based on what they refer to and how they are used in relevant real-world contexts). Note that this crucial element of reading comprehension depends, fundamentally, upon specific world knowledge in the domain that the text treats. A contemporary philosophy text might use words like defeasible, propositional calculus, modal operator, zombie, counterfactual, supervenience, indexicality, and grue, and one will have to know how these words are used in contemporary philosophy and quite a bit about how they are related to one another in order to understand the text at all.

4. Recognize the kairos, or total context, of the text (its who, what, where, when, why, and how, including its genre, or type, the concerns of its author, and its literary and rhetorical conventions). Note that this crucial element of reading comprehension depends upon prior experience with similar extra-textual elements.

Let’s look at each of these in turn
to learn what science now tells us about them and how our reading comprehension
instruction goes wrong.

Phonics and Reading Comprehension

This is the one bright spot in our
reading instruction, an area where practice has caught up to scientific
understanding. However, it’s taken us a while to get there. In the middle of
the last century, we were using what is known as the “Look-Say” method for
teaching kids to decode texts. This method was enshrined in such curricula as
the Dick and Jane readers. The method was based on a now-discredited
Behaviorist theory that saw language learning as repeated exposure to
increasingly complex language stimuli paired with ostensive objects (in the
case of the Dick and Jane readers, with illustrations). See Dick run. Dick runs
fast. See, see, how fast Dick runs. The theory of language learning by mere
association of the stimulus and its object dates all the way back to St.
Augustine, who wrote in his Confessions:

When grown-ups named some object and at the same time turned towards
it, I perceived this, and I grasped that the thing was signified by the sound
they uttered, since they meant to point it out. . . . In this way, little by
little, I learnt to understand what things the words, which I heard uttered in
their respective places in various sentences, signified. And once I got my
tongue around these signs, I used them to express my wishes.[1]

It’s an intuitive theory, like the
theory that moving objects use up their force until they stop, but like that
theory, it’s wrong. Look-Say was a flawed approach because it was based on a
false theory of how language was acquired. The fullest exposition of that flawed
theory can be found in B.F. Skinner’s Verbal Behavior (1957). In 1959,
Noam Chomsky, who has done more than anyone to create a true science of
language learning, delivered a devastating blow to behaviorist theories of
language learning in a seminal review of Skinner’s book.[2] Basically, Chomsky described
aspects of language, such as its embedded recursiveness and infinite
generativity, that, like jazz improvisation, cannot be explained solely on the
basis of responses to stimuli. More about the Chomskian revolution later.

Toward the end of the last century,
hundreds of thousands of educators around the country embraced something called
“Whole Language instruction.” Proponents argued that it wasn’t necessary to
teach kids sound-symbol correspondences because language was learned
automatically, in meaningful contexts. The idea was that one simply had to
expose kids to meaningful language at their level, and the decoding stuff would
take care of itself, in the absence of explicit decoding instruction. Those states
and school districts that adopted Whole Language approaches saw their students’
reading scores fall precipitously. Education is given to such fads and to such
disastrous results.

A little knowledge of linguistic
science would have prevented the debacle that was Whole Language. The best
current scientific thinking is that language emerged some 50,000-to-70,000
years ago. For many thousands of years, people learned to use spoken language
without explicit instruction. However, writing is a relatively recent
phenomenon. It’s been around for only about 5,000 years (It emerged in
Mesopotamia around 3,000 BCE, in China around 1,200 BCE, and in Mesoamerica
around 600 BCE). Both the Look-Say and Whole Language proponents failed to
recognize that spoken language has been around long enough for brains to evolve
specific mechanisms for learning it automatically, in the absence of explicit
instruction, but that this is not true of writing. There is no evolved,
internal mechanism, in the brain, specifically wired for decoding of written
language, as there is for spoken language. Instead, decoding of written symbols
and associating them with speech sounds requires, usually, explicit
instruction.
Such decoding appropriates general pattern recognition
abilities of the brain and puts them to this particular use. Some few children
are good enough at pattern recognition and get enough exposure to sound-symbol
correspondences to be able to learn to decode in the absence of explicit
instruction in interpretation of those correspondences, but those kids often
don’t develop the automaticity needed for truly fluent reading. There is now no
question about this: There is voluminous research showing that most students
have to be taught phonics (sound-symbol correspondences) explicitly if they are
to learn to decode fluently. For excellent reviews of this research, see Diane McGuinness’s Early Reading Instruction: What Science Really Tells Us about How to Teach Reading (Cambridge, MA: Bradford/MIT Press, 2004) and her Why Our Children Can’t Read and What We Can Do about It: A Scientific Revolution in Reading (New York: Touchstone/Simon and Schuster, 1999).

The Look-Say advocates got their
ideas from simplistic Behaviorist models of learning, but where did the Whole
Language people get theirs? Well, from listening at the keyholes of linguists.
As often happens in education, professional educators half heard and half
understood something being said by scientists and applied it in a crazy
fashion. What they half heard was that linguists were saying that language is
learned automatically. The part that they missed is that the linguists were
talking about spoken language, not written.

The upshot: In order to be able to
comprehend texts, there is a prerequisite: automaticity with regard to decoding
of sound-symbol correspondences. Where does one get this automaticity? From
explicit phonics instruction. This is a lesson that we have learned. Classroom practice has caught up with the science, and most elementary schools now
use, successfully, an explicit early phonics curriculum. That’s the good news.
Now for the rest, which is not so good.

Grammatical Fluency

In the early 2000s, the U.S. Department of Education committed billions of dollars to an initiative called Reading First, with the aim of improving reading among
schoolchildren nationwide. It’s a mark of how scientifically backward and
benighted our professional reading establishment is that when the directors of
this program consulted with “experts” and outlined the areas of focus to be
addressed by Reading First (and assessed by reading examinations), they
included among items to be addressed by the program students’ decoding skills
(phonemic awareness and phonics), vocabulary, and comprehension but completely
ignored grammatical fluency. However, and this ought to be obvious, written
texts consist of sentences that have particular syntactic patterns. If students
cannot automatically—that is, fluently and unconsciously—parse the syntactic
patterns being used, then they might have some idea what the subject of the
text is, but they won’t have a ghost of a chance of understanding what the text is saying. Syntactic complexity is a significant determinant of a text’s difficulty and readability.
Consider the opening two sentences of the Declaration of
Independence:

Sentence One:

When in the Course of human events it becomes necessary for one people to dissolve the political bands which have connected them with one another and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.

Sentence Two:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness—that to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed—that whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.

These sentences contain a few
vocabulary items that might be challenging to young people—impel, endowed,
and unalienable—but for the most part, the words used have high
frequency and present no great challenge. The most significant stumbling block
for comprehension of these sentences is their syntactic complexity. The first
sentence consists of a long adverbial clause, beginning with When in the
Course
and ending with entitle them, that specifies the conditions
under which it is necessary to take the action described in a main clause that
follows it (a decent respect . . . requires). The second sentence
consists of a main clause that introduces a list in the form of five relative
clauses, each specifying a truth held to be self-evident. Here’s the point: the
student who can’t follow the basic syntactic form of these sentences will be
completely lost. He or she won’t understand how a given idea in one of these
sentences relates to another idea in them (for that is what syntax does; it relates
ideas in particular ways). An automatic, fluid grasp of the syntax of a
sentence is critical to comprehending what it means.
What’s true of
complicated sentences like these from the Declaration of Independence is true
of sentences in general. One can’t comprehend them if one cannot parse their
syntax automatically (quickly and unconsciously). Grammatical competence is one
of the keys to decoding, and decoding is a prerequisite for comprehension.

So, are schools today ensuring via
their instructional methods and assessments that students are gaining the
automatic syntactic fluency necessary for decoding? Well, no. In fact, they
are implementing materials based on a folk theory of grammar that predates the
current scientific model of language acquisition.
Consider, for example,
this gem from the Common Core State Standards (CCSS) in English Language Arts,
which provide the outline for current instruction in English and reading.
According to the CCSS, an eighth-grade student should be able to

Explain the
function of verbals (gerunds, participles, infinitives) in general and their
function in particular sentences. (CCSS.ELA-Literacy-L.8.1a).

The other grammar-related
“standards” in the CCSS are similar. They all show that at the highest
levels in our educational establishment, there is a complete lack of
understanding of what science now tells us about how the grammar of a language
is acquired.
The standard instantiates a prescientific, folk theory of
grammar that assumes that it is explicitly acquired and is available for
explicit description by someone who knows it (“Explain the function”).

This standard tells us that
students are to be instructed in and assessed on the ability a) to explain the
function of verbals (gerunds, participles, and infinitives) in general and b)
their function in particular sentences. In order for students to be able to do
this, they will have to be taught how to identify gerunds, participles, and
infinitives and how to explain their functions generally and in particular
sentences. In order for the standard to be met, these bits of grammatical
taxonomy will have to be explicitly taught and explicitly learned, for the standard
requires students to be able to make explicit explanations. Now, there is a
difference between having learned an explicit grammatical taxonomy and having
acquired competence in using the grammatical forms listed in that taxonomy. The
authors of the standard seem not to have understood this.

Let’s think about the kind of
activity that this standard envisions our having students do. Identifying the
functions of verbals in sentences would require students to be able to do,
among other things, something like this:

Underline the
gerund phrases in the following sentences and tell whether each is functioning
as a subject, direct object, indirect object, object of a preposition,
predicate nominative, retained object, subjective complement, objective
complement, or appositive of any of these.

That’s what’s entailed by PART of
the standard. And since the standard just mentions verbals generally and not
any of the many forms that these can take, one doesn’t know whether it covers,
for example, infinitives used without the infinitive marker “to,” so-called
“bare infinitives,” as in “Let there be peace.” (Compare “John wanted there to
be peace.”) Obviously, meeting this ONE standard would require YEARS of
explicit, formal instruction in syntax, and what contemporary linguistic
science teaches us is that all of that instruction would be completely
irrelevant to students being able to formulate and comprehend sentences.

Contemporary linguistic science
teaches that the grammar of a language is learned not through explicit
instruction in grammatical forms but, rather, automatically (fluently and
unconsciously) via the operation of an internal mechanism dedicated to such
learning. Permit me an example. If you are a native speaker of English, you know
that

the green, great dragon

“sounds weird” (i.e., is
ungrammatical) and that

the great, green
dragon

“sounds fine” (i.e., is grammatical).

That’s because, based on the
ambient linguistic environment in which you came of age, you intuited,
automatically, without your being aware that you were doing so, a complex set
of rules governing the proper syntax of adjectives in a series. No one
taught you, explicitly, these rules governing the order of precedence of
adjectives in English, and the chances are that you cannot even state the rules
that you nonetheless know.
And what’s true of this set of rules is true of
all but a minuscule portion of the grammar of a language that a speaker “knows”—that
he or she can use. Knowledge of grammar is like knowledge of how to walk. It is
not conscious knowledge. The walker did not learn to do so by studying the
physics of motion and the operation of motor neurons, bones, and muscles. The
brain and body are designed in such a way as to do these things automatically.
The same is true, contemporary linguistic science teaches us, of the learning
of the grammar of a language. Speakers and writers of English follow hundreds
of thousands of rules, such as the C-command condition on the binding of
anaphors (a key component of the syntax of languages worldwide), that they know
nothing about explicitly. Following this rule, they will say that “The
president may blame himself” but will never say “Supporters of the president
may blame himself,”[3]
which violates the rule, even though they were never taught the rule explicitly
and could not explain, unless they have had an introductory Syntax course, what
the rule is that they have been following all their lives. Since the
ground-breaking work by Noam Chomsky in the 1950s, we have over the past sixty
years developed a robust scientific model of how the grammar of a language is
acquired. It is acquired unconsciously and automatically by an internal
language acquisition mechanism.

Like many great thinkers, Chomsky
started with a simple question, asking himself how it is possible that most
children gain a reasonable degree of mastery over something as complicated as a
spoken language. With almost no direct instruction, almost every child learns,
within a few years’ time, enough of his or her language to be able to
communicate with ease most of what he or she wishes to communicate. This
learning seems not to be correlated with the child’s general intelligence and
fails to occur only when there is a physical problem with the child’s brain or
in conditions of extreme deprivation in which the child has limited exposure to
language. If one looks scientifically at what a child knows of his or her
language at the age of, say, six or seven, it turns out that that knowledge is
extraordinarily complex. Furthermore, almost all of what the child knows has
not been directly and explicitly taught. For example, long before going to
school and without being taught what direct objects and objects of prepositions
are, an English-speaking child understands that the first two sentences,
below, “sound right” and that the second two sentences do not.

Jose threw the
football.

The football
landed in the neighbor’s yard.

* The football
threw Jose.

* Landed the
football the yard neighbor’s in.

In other words, on some level, the
English-speaking child “knows” that objects follow (and do not precede) the
verbs and prepositions that govern them, even though he or she has no clue what
objects, verbs, and prepositions are. The Japanese child, in contrast, “knows”
just as well that in Japanese objects precede (and do not follow) the verbs and
prepositions that govern them. So, imagine that the sentences, above, were
translated word-by-word into Japanese and that the word order were retained. To
a Japanese child, the word order of the first two sentences above would sound quite
strange, while the word order of the second two sentences would be
unexceptional—just the opposite from English. English is a head-first language,
in which the head of a grammatical phrase precedes its objects and complements.
Japanese is a head-last language, in which the head of a grammatical phrase
follows its objects and complements. Kids are not taught this. They are born
with part of the grammar (the fact that there are heads, objects, and
complements, for example) already hard wired into their heads. Then, based on
their ambient linguistic environments, they automatically set certain
parameters of the hard-wired internal grammar, such as head position. Children
do not learn such rules by being taught them any more than a whale learns to
echolocate by attending echolocation classes.

Chomsky’s central insight was that
in order for a child to be able to learn a spoken language with such rapidity
and thoroughness, that child must be born with large portions of a universal
grammar of language already hardwired into his or her head. So, for example,
the neural mechanisms that provide for classification of items from the stream
of speech into verbs and prepositions and objects, and those mechanisms that
allow verbs and prepositions to govern their objects, are inborn. They are part
of the equipment with which human children come into the world. Then, when a
child hears a particular language, English or Japanese, for example, certain
parameters of the inborn language mechanism, such as the position of objects
with respect to their governors, are set by a completely unconscious, autonomic
process that is itself part of the innate neural machinery for language
learning.

Because the learning of a grammar
is done automatically and unconsciously by the brain, explicit instruction
in grammatical forms of the kind called for by the Common Core State Standard
quoted above is irrelevant. And, in fact, such instruction is most likely going
to get in the way,
much as if one tried to teach a child to walk by making
him or her memorize the names of the relevant muscles, nerves, and skeletal
structures or tried to teach a baseball player how to hit by teaching him or
her calculus to describe the aerodynamics of baseballs in motion. In other
words, the national standard is based on a prescientific understanding of how
grammar is acquired. This should be a national scandal. It’s as though we had
new standards for tactics for the U.S. Navy that warned against the possibility
of sailing off the edge of the earth.

To return to the main topic, we
have seen, above, that grammatical fluency and automaticity is an essential
prerequisite to reading comprehension. So, if such fluency and automaticity is
not gained via explicit instruction, how is it to be acquired? The answer is
quite simple: The child has to be exposed to an ambient linguistic
environment containing increasingly complex syntactic structures so that the
language acquisition device in the brain has the material on which to work to
put together a model of the language.

So, why do some kids have, early
on, a great deal of syntactic competence while other kids do not? The answer
should be obvious from the foregoing. Some were raised in syntactically rich
linguistic environments, and some were not. In 2003, Betty Hart and Todd R.
Risley of the University of Kansas published a study showing that students from
low-income families were exposed, before the age of three, to 30 million fewer
words (to a lot less language) than were students from high-income families,
and that the language to which they were exposed was extremely syntactically
impoverished.[4]

Shockingly, however, what reading
comprehension people commonly do in their classrooms mirrors what happens to
kids from impoverished families and is precisely the opposite of what is
required by the language acquisition device, or LAD. Instead of providing
syntactically complex materials as part of the child’s ambient linguistic
environment so that the LAD can “learn” those forms automatically and
incorporate them into the child’s working syntax, reading “professionals” intentionally
use with children what are known as levelled readers. These intentionally
contain short (and thus, usually, syntactically impoverished) sentences that
will come out “at grade level” according to simplistic (and simple-minded)
“readability formulas” like Lexile and Flesch-Kincaid. The readability formulas
used to “level” the texts put before children vary in minor details, but almost
all are based on sentence length and word frequency (how frequently the words used
in the text occur in some language collection known as a corpus). Shorter
sentences are, of course, statistically likely to be syntactically simple. So,
as a direct result of the method of text selection, complex syntactic forms
are, de facto, banished from textbooks and other reading materials used
in reading classes. Teachers go off to education schools to take their master’s
degrees and doctorates in reading, where they learn to use such formulas to
ensure that reading is “on grade level,” and by using such formulas, they
inadvertently deprive kids of precisely the material that they need to be
exposed to in order for their LADs to do their work. After years of exposure to
nothing but texts that have been intentionally syntactically impoverished, the
students have not developed the necessary syntactic fluency for adult reading.
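
A concrete aside on what such formulas actually compute: the Flesch-Kincaid grade level named above reduces a text to just two quantities, average sentence length and average syllables per word. Here is a minimal sketch, assuming a deliberately naive syllable counter (the published formula fixes only the coefficients, not how syllables are counted):

import re

def count_syllables(word):
    """Naive estimate: count runs of vowels, minimum of one (an assumption of this sketch)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return round(0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59, 1)

# A primer-style sentence versus the opening of the Declaration of Independence.
# Only sentence length and word length enter the score; syntax is never measured.
print(flesch_kincaid_grade("See Dick run. Dick runs fast."))
print(flesch_kincaid_grade(
    "When in the Course of human events it becomes necessary for one people "
    "to dissolve the political bands which have connected them with one another."))

The point is not the particular numbers but that the score rises and falls with sentence length and word length alone, which is why texts engineered to satisfy such formulas end up syntactically flat.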

When confronted with real-world texts, with their embedded relative and
subordinate clauses, verbal phrases, appositives, absolute constructions,
correlative constructions, and so on, they can’t make heads or tails of what is
being said because the sentences are syntactically opaque. A sentence from the
Declaration of Independence, The Scarlet Letter, a legal document, or a
technical manual might as well be written in Swahili or Linear B.

What can be done to ensure that
students develop syntactic fluency? I am not suggesting that students be given
texts too difficult for them to comprehend, obviously. I am saying that they
must be given texts that are challenging syntactically—that present them with
syntactic forms that they cannot, at their stage of development, parse
automatically, for it is only by this means that the innate grammar-learning
mechanism can operate to expand the student’s syntactic range. Here are a few
techniques: In conversation with students, use syntactically complex language. Present
them with texts that are routinely just above their current level of syntactic
decoding ability. Have them listen to syntactically complex texts (because
syntactic decoding of spoken language outpaces syntactic decoding of written
language). Have them memorize passages containing complex syntactic constructions.
Have them do sentence combining and sentence expansion exercises. And most of
all, as soon as they can begin to do so, with difficulty, have them read
real-world materials—novels and essays and nonfiction books that have NOT been
leveled but that are high interest enough to repay their effort. That such
materials will contain difficult-to-parse constructions is precisely the point.
Those are the materials on which the LAD works to acquire internal grammatical
competence.

Vocabulary and World Knowledge

So, with regard to the grammatical
fluency component of reading comprehension, the state of our pedagogy is
abysmal. We have things precisely backward. In the whole language days, we
avoided explicit instruction in phonics when it was precisely explicit
instruction that was required by the inadequacy of the internal
language-learning mechanism with regard to the task of interpreting
sound-symbol correspondences. Today, we do explicit instruction in grammar,
when the internal language-learning mechanism is set up to learn grammar
automatically, without explicit instruction.

Are things any better with regard
to the vocabulary component of comprehension? Sadly, no. The most common way in
which vocabulary instruction is approached in the United States today is by
giving students a list of “difficult” (low-frequency) words taken from a
selection. So, for example, a student might be assigned the reading of Chapter
1 of Wuthering Heights and be given this list of words from the chapter:



Causeway

Deuce

Ejaculation

Gaudily

Laconic

Manifestation

Misanthropist

Morose

Peevish

Penetralia

Perseverance

Phlegm

Physiognomy

Prudential

Reserve

Signet

Slovenly

Soliloquise

Vis-à-vis



Students are then asked to look the
words up in the dictionary or in a glossary, define them, write sentences using
them, and memorize them for a vocabulary quiz. As with grammar, the preferred
approach involves explicit instruction.

Now, the thing that should strike
you, in looking at that list, taken, as it is, out of the context of the novel,
is that these might as well be words taken at random. The task facing the
student is quite similar to memorizing a random list of telephone numbers.

Other commonly used instructional
techniques include teaching students to do word analysis by having them
memorize Greek and Latin prefixes, suffixes and roots and teaching them to use
context clues such as examples, synonyms, antonyms, and definitions.

Again, these instructional approaches
fly in the face of the established science of language acquisition. We now
know, because linguists have studied this, that almost all of the vocabulary
that an adult uses (active vocabulary) and understands (passive vocabulary) is
learned unconsciously, without explicit instruction.
Far less than one
percent
of adult vocabulary has been acquired by direct, explicit
instruction because direct, explicit instruction is not the means by which
vocabulary is acquired. As with grammar, there is a way in which the
language-learning mechanism in the brain is set up to learn vocabulary, and
that way is not via explicit instruction. So, how do people learn vocabulary? A
person takes a painting class at the Y. In the course of the coming weeks, the
people around him or her use, in that class, terms like gesso, chiaroscuro,
stippling, filbert brush, titanium white,
and so on, and, in the absence of
explicit instruction, the speaker picks the words up because people’s brains
are built to acquire vocabulary automatically in semantic networks in
meaningful contexts.
Vocabulary is a variety of world knowledge, and like
other world knowledge, it is added, incidentally, to the network of knowledge
that one has about a context in which it was actually used. For vocabulary to
be acquired and retained, it has to be learned in the context of other
vocabulary and world-knowledge having to do with a particular domain. Human
brains are connection machines. Knowledge is easily acquired and retained if
it is connected to existing knowledge.
The message for educators is clear: If
you want students to learn vocabulary, skip the explicit vocabulary instruction
and concentrate, instead, on extended exposure to knowledge in particular
domains and enable the students to acquire, in context, the vocabulary native
to that domain. The focus has to be on the knowledge domain—on turtles or Egypt
or 19th-century Romanticism or whatever—and
the vocabulary
has to be learned incidentally and in batches of semantically related terms
because that is how vocabulary is actually learned. It’s how the brain is set
up to learn new words. As it stands now, students are subjected to many, many
thousands of hours of explicit instruction in random vocabulary items, with the
result that far less than one percent of the vocabulary that they actually
learn was acquired by this means. The opportunity cost of this heedless
approach is staggering.

My teachers should have ridden with Jesse James
For all the time they stole from me.

–Richard Brautigan

With regard to the vocabulary from Wuthering
Heights,
teachers are well advised to read with the kids and stop, from
time to time, to clarify the meaning of a word in its immediate context but to
skip the list and its attendant fruitless pedagogical activities.

Kairos and World Knowledge

Texts exist in context. If someone
says, “We need to tie up the loose ends here,” it makes a difference whether
the speaker is a macramé instructor or Tony Soprano. Is the statement about
pieces of string or about a mob hit? The context matters. Comprehending the
sentence—understanding what it means—depends crucially on the context in which
it is uttered. The same is true of almost all language.

The ancient Greeks used the term kairos
to refer to a speaker’s sensitivity to his or her audience, to the occasion,
and to the immediate context of the utterance. I’ll be using it, here, in a
slightly expanded sense to refer to all the extra-textual stuff that goes into
understanding a text. For years, reading comprehension teachers have been told
to begin the reading of a text by “activating their students’ prior knowledge.”
Millions of teachers dutifully learned this “strategy” and attempted to apply
it in their classrooms even though a moment’s reflection would have revealed it
to be completely absurd. If a student already has the relevant
background knowledge to understand a text, then it will not need to be
“activated.” It will simply be there. And if the student does NOT have the
relevant background knowledge, no amount of having students tell what they
already know will supply it. That said, and here we have yet another example of
educators half hearing what scientists have been saying, cognitive scientists
like Daniel Willingham of the University of Virginia and the education theorist
E.D. Hirsch, Jr., have shown beyond any reasonable doubt that background
knowledge—what the writer assumes that the reader already knows—is one of the
great keys to reading comprehension. Let’s consider an example. My students’ eleventh-grade
literature textbook contains a passage about how Arthur Miller wrote The
Crucible
in reaction to the Army-McCarthy hearings of 1954. The passage
mentions people being hauled before the Senate Subcommittee on Investigations
and being accused of being Communists. It goes on to say that Miller was
concerned by the hysteria and guilt-by-association attendant to these hearings
and wrote the play to show how the same sort of thing occurred during the Salem
Witch Trials of 1692. Now, if the students reading that do not know the
background—what a Communist is; that the United States is a Capitalist country;
that Communism and Capitalism are antagonistic; that in the 1950s, the United
States was involved in a Cold War with the Communist Soviet Union; that the
Soviet Union had vowed to bring down the American system; that certain Senators
and Congressmen in the 1950s were concerned about Communist infiltration of the
media, the government, and the armed services; what a subcommittee is; what a
hearing is; and that the Subcommittee on Investigations attempted to identify
Communist sympathizers, then the passage in the text will be meaningless. In
short, comprehension depends critically on world knowledge. Without
the relevant background knowledge, comprehension cannot occur.
A student
cannot read Milton or Dante with comprehension if he or she is ignorant of the
Bible and won’t comprehend the title of George Bernard Shaw’s play about
Professor Higgins and Liza if he or she is ignorant of the Greek myth in which
Pygmalion falls in love with a statue. Consider this opaque text from Dylan
Thomas:

The twelve
triangles of the cherub wind

This impenetrable text becomes crystal
clear when one realizes that Thomas is referring to old maps that represented
the winds as cherubs whose breath—the winds—inscribed triangles across the
maps. One can’t begin to understand Plato’s allegory of the cave without
understanding that he was highly influenced by Greek mathematics, recognized
that perfect forms (like a point or a perfect triangle) did not exist in the
world but did exist in the mind, used a single word (psyche) for both
mind and spirit, and thus thought of anything perfect (and thus, he thought,
good) as existing in a separate, spiritual plane that could be accessed through
mental/spiritual activity. One has to have a lot of information about the
background—the concepts available to Plato and what he was concerned with—to
make any sense at all of his bizarre little story. Domains of knowledge, from
auto mechanics to the growing of orchids to theodicy and dirigible driving, all
have their associated vocabulary—not just jargon but words and phrases that
appear with particular frequency and particular meaning within them. And
knowledge of this vocabulary is not a matter of possession of a bunch of
definitions taken in the abstract but, rather, possession of an understanding
of how those words and phrases are used and in what contexts within the
relevant domain. Life coaches and physicists use the word potential in
related but distinct ways and about different objects. Understanding what is
meant by the word, in a text, requires, in addition to knowledge of its
definition, knowledge of how it is used in the subset of the world that is the
knowledge domain of the text. Furthermore, the ability to use a term actively
involves mastering not only its definition but also its inflected and
derivative forms, something that is learned not through explicit, rote study
but through use in context. A student hasn’t really learned the word imply
unless he or she can properly use such inflected forms as implying, implied,
and implies as well as such derivative forms as implication,
and one learns those forms, really learns them, only through repeated use in a
context. Simply memorizing the definition for a test is a recipe for
forgetting.

A Summary of the Prerequisites
for Reading Comprehension

Decoding abilities—phonics and grammatical fluency—are, of course, prerequisites to comprehension. They must
be mastered before comprehension is possible. The same is true of
domain-specific world knowledge—knowledge about Communists, the Bible, Greek
myths, old maps, geometrical forms, or whatever it is that the author is taking
for granted that the reader knows, including the vocabulary used in that
context, or domain. E.D. Hirsch, Jr., has written eloquently and persuasively
on precisely this subject in numerous works, including The Schools We Need
and The Knowledge Deficit. The reader is referred to those works for
further information. Suffice it to say that the teacher must ensure that
students have the relevant background knowledge, including domain-specific
vocabulary, to understand what they are being asked to read and that
instruction should be focused on extended time spent in particular knowledge
domains so that students can build the bodies of knowledge that they need for
comprehending texts in the future. New knowledge needs a hook to hang on. That
hook is other knowledge in the relevant knowledge domain.

That ought to be obvious. However,
today, reading instruction has devolved into isolated practice of
“comprehension strategies” using short texts taken absolutely at random
—a
snippet of text here on invasive species, a snippet of text there on Harriet
Tubman. But brains are built to acquire knowledge in connected networks, and
the hundreds of thousands of hours spent practicing strategies on random, isolated texts are completely wasted. More about that
later.

So, knowledge is essential to
comprehension. But there are other extra-textual matters—other parts of the
overall kairos of the text—that are also essential. Among these are genre and a
whole raft of conventions of usage—idioms, transitional devices and turns, figures
of speech, rhetorical techniques, manuscript formatting, and so on. So, for
example, comprehending a text by Sir Philip Sidney or one of William Blake’s Songs
of Innocence
may require knowledge of the conventions of the genre of
pastoral—that lambs represent innocence, that shepherds are uncorrupted by city
life, that spring represents youth and rebirth, and so on. The semantic
component of language is highly conventional. Understanding a poem often
requires familiarity with conventional symbol systems like that which relates
the cycles of the seasons to the life cycle (spring/youth, summer/maturity,
autumn/age, winter/death). One has to know the convention, or one is lost.

Given all this, one would expect
that reading comprehension instructors were devoting their time to (a) ensuring that students have automaticity in phonetic and grammatical decoding, (b) building vocabulary and world knowledge through extended work in critical knowledge domains, and (c) acquainting students with the conventions of various genres and
the primary literary and rhetorical conventions. After all, that’s what reading
comprehension requires. But if you made such an assumption, you would be wrong.
What reading comprehension teachers are doing, instead, is spending their time
teaching “reading strategies.”

The Devolution of Reading
Comprehension Instruction and Assessment

Back in 1984, Palincsar and Brown wrote a highly influential paper about something they called “reciprocal teaching.”[5] They suggested, in that paper, that teachers conducting reading circles encourage dialogue about texts by having students make predictions, ask questions,
clarify the text, and summarize. Excellent advice. But this little paper had an
enormously detrimental unintended effect on the professional education
community. All groups are naturally protective of their own turf. The paper by
Palincsar and Brown had handed the professional education community a
definition of their turf: You see, we do, after all, have a unique,
respectable, scientific field of our own that justifies our existence—we are
the keepers of “strategies” for learning. The reading community, in particular,
embraced this notion wholeheartedly. Reading comprehension instruction
became MOSTLY about teaching reading strategies,
and an industry for identifying and teaching such strategies emerged. The vast, complex
field of reading comprehension was narrowed to a few precepts: teach kids

· to identify the main idea and supporting details,
· to identify sequences,
· to identify cause and effect relationships,
· to make predictions,
· to make inferences,
· to use context clues,
· to identify text elements.

In the real world, outside school, a
strategy is a broad approach to accomplishing a goal. In EdSpeak,
weirdly, a strategy is any particular thing, whatsoever, that one might
do to advance toward a goal (what in the real world would be called a tactic). It’s
typical of people in education to use words imprecisely, like this—to borrow a
term and then use it improperly. Consider the term benchmark. In the
real world, a benchmark is a standard reflecting the highest performance
in a given industry. So, for example, the fastest read/write speed for a disk drive achieved by any manufacturer is a benchmark, or goal, for other disk
drive manufacturers to meet. In education, a benchmark is any sort of
interim evaluation. Educators confused the goal (the benchmark) with the method
of evaluating achievement of the goal. Education schools are bastions of such
sloppy thinking—confusion of means and ends, misapplication of concepts, and so
on.

Throughout American K-12 education,
in the late 1980s, we started seeing curriculum materials organized around
teaching some variant of the list of “reading comprehension strategies” given
above. Where before a student might do a lesson on reading Robert Frost’s
“Stopping by Woods on a Snowy Evening,” he or she would now do a lesson on
Making Predictions, and any random snippet of text that contained some examples
of predictions, as long as it was “on grade level,” would be a worthy object of
study.

One problem with working at such a high level of abstraction—of having our lessons be about, say, “making inferences”—is that the abstraction reifies; it hypostasizes. It combines
apples and shoelaces and football teams under a single term and creates a false
belief that some particular thing—not an enormous range of disparate
phenomena—is referred to by the abstraction. In the years after Palincsar and
Brown’s paper, educational publishers produced hundreds of thousands of lessons
on “Making Inferences,” and one can look through all of them, in vain, for any
sign of awareness on the part of the lessons’ creators that inference is
enormously varied and that “making proper inferences” involves an enormous
amount of learning that is specific to inferences of different kinds. There
are, in fact, whole sciences devoted to the various types of
inference—deduction, induction, and abduction—and whole sciences devoted to
specific problems within each. The question of how to “make an inference” is
extraordinarily complex, and a great deal of human attention has been given to it
over the centuries, and a quick glance at any of the hundreds of thousands of
Making Inferences lessons in our textbooks and in papers about reading
strategies by education professors will reveal that almost nothing of what is
actually known about this subject has found its way into our instruction. If
professional educators were really interested in teaching their students how to
“make inferences,” then they would, themselves, take the trouble to learn some
propositional and predicate logic so that they would understand what deductive
inference is about. They would take the trouble to learn some basic probability and techniques for hypothesis testing so that they would understand
the tools of inductive and abductive inference. But they haven’t done this
because it’s difficult, and so, when they write their papers and create their
lessons about “making inferences,” they are doing this in blissful ignorance of
what making inferences really means and, importantly, of the key
concepts that would be useful for students to know about making inferences that
are reasonable. This is but one example of how, over the past few decades, a
façade, a veneer of scientific respectability has been erected in the field of
“English language arts” that has precious little real value.
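By way of illustration only, here is a minimal sketch in Python (mine, not anything drawn from the lessons being criticized) of what even a first formal step into deductive inference looks like. The validity of a form like modus ponens can be checked mechanically by enumerating truth assignments:

from itertools import product

def implies(p, q):
    # Material implication: "p implies q" is false only when p is true and q is false.
    return (not p) or q

# Modus ponens: from "p implies q" and "p", infer "q".
# The form is deductively valid if no assignment of truth values makes
# both premises true while the conclusion is false.
valid = all(q
            for p, q in product([False, True], repeat=2)
            if implies(p, q) and p)
print(valid)  # True: the inference holds in every case

The point is only that “making an inference” has, in logic, a precise meaning that can be stated and checked.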

I bring up the issue of instruction
in making inferences in order to make a more general point—the professional
education establishment, and especially that part of it that concerns itself
with English language arts and reading instruction, has retreated into dealing
in poorly conceived generalization and abstraction. Reading comprehension
instruction, in particular, has DEVOLVED into the teaching of reading
strategies, and those strategies are not much more than puffery and vagueness. To borrow Gertrude Stein’s phrase, there is no there there. No kid walks away from his
or her Making Inferences lesson with any substantive learning, with any world
knowledge or concept or set of procedures that can actually be applied in order
to determine what kind of inference a particular one is and whether that
inference is reasonable. A kid does not learn, for example, that some
approaches to making inductive inferences include looking at historical
frequency, analyzing propensities, making systematic observations and tallies,
calculating the probabilities, conducting an A/B split survey, holding a focus
group, and performing a Gedankenexperiment or an actual experiment
involving a control group and an experimental group. Why? Because one has to
learn and teach a lot of complex material in order to do these things at all,
and professional education folks have decided, oddly, that they can teach
making inferences without, themselves, learning about what kinds of inferences
there are, how one can make them rationally, and how one evaluates the various
kinds.
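For illustration only (a minimal sketch with made-up tallies, not anything taken from a reading curriculum), here is roughly what “calculating the probabilities” for an A/B split looks like:

import math

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    # Normal-approximation z-statistic for the difference between two observed
    # proportions, the textbook way to ask whether an A/B split shows more
    # than chance variation.
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical tallies: 62 of 200 respondents in group A said yes; 41 of 200 in group B.
z = two_proportion_z(62, 200, 41, 200)
print(round(z, 2))  # about 2.4; beyond 1.96, so unlikely to be mere chance at the usual 5% level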

Though reading comprehension
instruction has now almost completely devolved into the teaching of “reading
comprehension strategies,” those strategies do not, themselves, hold up to
close inspection. They all exemplify a fundamental kind of error that
philosophers call a “category error”–a mistaken belief that one kind of thing is like, and so belongs in the same category as, some completely different kind
of thing. Reading comprehension “specialists” now speak of “learning
comprehension strategies,” as though recognizing the main idea, making
inferences, and so on, were discrete, monolithic, invariant skills that one can
learn, akin to learning how to sew on a button or learning how to carry a
number when adding, but the “reading comprehension strategies” are nothing like
that. Being able to identify the main idea of a given piece of prose, poetry,
or drama is NOT a general, universally applicable procedural skill like being
able to carry out the standard algorithm for multiplying multi-digit numbers,
and placing them into the same category (“skill”) is an example of the logical
fallacy known as a category error.

When you see a list of the
skills to be tested for math and the skills to be tested in ELA–that is, when
you look at a list of “standards”–it’s important that you understand that there are very, very different KINDS of thing on those lists, the one for math
and the one for English. These are not only as different as are apples and
oranges, they are as different as an apple is from a hope for the future or the
pattern of freckles on Socrates’s forehead or the square root of negative one.
They are different sorts of thing ALTOGETHER because the math skills are
discrete, monolithic, and invariant, and those “reading strategies” are not.

Let me illustrate the point about
“the main idea.”

What is the main idea of the
following?

The ready-to-hand is encountered within the world. The Being of this entity, readiness-to-hand,
thus stands in some ontological relationship toward the world and toward
worldhood. In anything ready-to-hand, the world is always ‘there.’ Whenever we
encounter anything, the world has already been previously discovered, though
not thematically.[6]

Now, note that this
passage would have a pretty low Lexile level. It doesn’t contain a lot of
difficult (long, complicated, low-frequency) words, and the difficult words (ontological, ready-to-hand, thematically) can be explained. It doesn’t contain long or
syntactically complicated sentences. But the chances are good that unless you
are familiar with continental philosophy, you will have NO CLUE WHATSOEVER what
this passage is saying. In order for you to understand the main idea of this
passage, you would need, at a minimum, an introduction to the philosophical
problems that Heidegger is addressing in the passage. In other words, you would
have to have a lot of world knowledge about continental philosophy. Otherwise,
the passage will be impenetrable to you. No general “finding-the-main-idea
skill” will help you to make sense of the passage.
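One can see how blind such measures are with a crude sketch. The following is not the proprietary Lexile formula, just a rough proxy over the same kinds of surface features (sentence length and word length); it rates the Heidegger excerpt as easy because those are the only things it can see:

import re

def surface_difficulty(text):
    # A crude readability proxy: mean sentence length in words and mean word
    # length in letters. Formulas like Lexile or Flesch-Kincaid also weigh
    # word frequency or syllable counts, but all of them look only at
    # surface features of this general kind.
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z'-]+", text)
    return (len(words) / len(sentences), sum(len(w) for w in words) / len(words))

excerpt = ("The ready-to-hand is encountered within the world. "
           "In anything ready-to-hand, the world is always there.")
print(surface_difficulty(excerpt))  # short sentences, short words: "easy" on paper

Nothing in those counts can register that the passage presupposes the problems of continental philosophy.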

Now, what is the main idea of the
following?

One of the limits
of reality

Presents itself in Oley when the hay,

Baked through long days, is piled in mows. It is

A land too ripe for enigmas, too serene.

There the distant fails the clairvoyant eye.

Things stop in
that direction and since they stop

The direction stops and we accept what is

As good. The utmost must be good and is

And is our fortune and honey hived in the trees

And mingling of colors at a festival.[7]

Again, the language is not that
difficult. One can easily define the less frequent words–enigmas, serene,
clairvoyant,
and utmost. But that’s not going to help you figure out
what the “main idea” here is. For that, the royal road is an introduction to
the kinds of concerns that Wallace Stevens took up in his poetry. If you know
from his other work that Stevens wrote, time and time again, about the failure
of our abstractions to account for the concrete facts of the world and about
our tendency to live in our abstractions rather than in the real world, if you
know that Stevens was distrustful of abstractions of all kinds—religious, political, philosophical, and so on—then the passage will make sense to you. If you
don’t, well, good luck.

Now, notice that what is involved
in figuring out the main idea of each of these passages is entirely different. Oh,
sure, there are similarities between the passages. Both are passages in
English. Both deal with a philosophical idea. Actually, they both deal with the
same philosophical idea. But in the one case, to grasp the main idea, you have
to be familiar with a lot of Continental philosophy and with the kinds of
problems that such philosophy addresses. In the other, you need to be able to
recognize that Stevens is revisiting what is, for him, a recurring theme.

This is the key point: there is no
one procedure–no one finding-the-main-idea procedure–that I can teach you that will enable you to determine what each passage, and any other passage taken
at random from a piece of writing, means.

In other words, no instruction in
some general finding-the-main-idea skill is going to help you, usually, to find
the main idea. There’s a reason for that: THERE IS NO “general finding the
main idea skill.” That such a thing exists is an UTTER FICTION. The “general
finding-the-main-idea skill” is as fictional as one of Sir Arthur Conan Doyle’s
fairies.

Finding the main idea is context
dependent in a way that adding multi-digit numbers isn’t. It makes no
difference whether you are adding 462 and 23 or 1842 and 748, you are going to
follow the same procedure and draw upon the same class of facts. Not so with
“finding the main idea.” There is no magical procedure for main idea finding
that applies to all texts and scoots around the necessity of engaging with a
particular text—decoding it, parsing it, applying world knowledge to understanding
it, recognizing its context and conventions, and then generalizing about it or
recognizing that some statement within it encapsulates that idea.
Sure,
occasionally one will encounter, in the real world, a puerile piece of writing
of the five-paragraph theme variety that states its main idea in an
introduction or conclusion. One can teach students, for a few limited types of
texts, to do that. But most texts in the real world contain no such idea
readily identifiable using a particular procedure. Hedda Gabler contains
no thesis statement. Neither does the Gettysburg Address or The Diagnostic
and Statistical Manual of Mental Disorders.
Determining the main idea of each of these texts is a tall order and requires, for each, a unique procedure.
Schooling in reading comprehension should teach kids to make sense of
real-world writing, and writing in the real world does not take the form of
five-paragraph themes with the main idea neatly tucked in at the conclusion of
the introductory paragraph. And as long as we continue cooking up pieces of
fake writing for use in exercises on finding the main idea in those, we are
perpetuating a myth.

And, of course, attempting to test
for possession of this mythical general finding-the-main-idea faculty that is being
magically transmitted to students via “reading comprehension instruction” is
like requiring people to bag and bring home any other sort of mythical entity–a
unicorn, Pegasus, or the golden apples from the tree at the edge of the world.

Now, how is it that people have
come to believe that there exists a “general finding-the-main-idea skill”?
Well, they have committed a category error. They’ve made the mistake of
thinking that figuring out what’s happening in a piece of poetry or prose is a
discrete, specific, universally applicable general skill like the ability to
carry out the algorithm for adding multi-digit numbers.

It’s not.
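The contrast with arithmetic can be made concrete. Here is a minimal sketch (in Python, purely for illustration) of the schoolroom carrying procedure; it is exactly the same fixed sequence of steps whether the inputs are 462 and 23 or 1842 and 748:

def add_by_carrying(a: str, b: str) -> str:
    # The standard schoolroom algorithm: pad to equal length, add digit by
    # digit from the right, and carry a 1 whenever a column sum reaches 10.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        column = int(da) + int(db) + carry
        digits.append(str(column % 10))
        carry = column // 10
    if carry:
        digits.append(str(carry))
    return ''.join(reversed(digits))

print(add_by_carrying("462", "23"))    # 485
print(add_by_carrying("1842", "748"))  # 2590

No comparably fixed sequence of steps exists for finding the main idea, and that is the whole point.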

And, of course, since there exists
no “general finding-the-main-idea skill,” there can exist no valid test of
general finding-the-main-idea skill. The tests, like the curricula, have to be
cooked to contain passages that work with the formula that the reading
comprehension teachers teach—look for the thesis statement in the introduction
and in the conclusion or the topic sentence in the paragraph.  For
brevity, let’s look at just the latter.

In 1866, Alexander Bain published his English Composition and Rhetoric: A Manual,[8] the great-grandfather of the writing textbooks of today. It was Bain who first characterized the paragraph as school texts have characterized it ever since: as a group of sentences related to or supporting a single topic sentence and characterized by unity and coherence. Here we have a classic categorical definition. The set of paragraphs has these essential, or defining, characteristics:

 

· Possession of a topic sentence
· Possession of a number of sentences related to or supporting the topic sentence
· Unity
· Coherence

Building on this definition, a school text might provide the following
heuristic for writing a paragraph: “State a general idea. Then back it up with specific
details (or examples
or instances). Make sure not to include any unrelated ideas, and make sure to make the connections among your ideas clear by using transitions.”

 

Of course, individual paragraphs
in the real world simply do not fit the standard textbook
definition, though that definition has been repeated with only minor variation ever since Bain.

Most pieces of writing and, ipso facto, most paragraphs, are narrative, and rarely does a narrative
paragraph have a topic sentence. Narrative paragraphs are typically just one darned thing after another. Two of the most common types of paragraphs, those that make up newspaper
articles and those that present dialogue in stories,
typically contain only one or two sentences, and a paragraph in dialogue
can be as short as a grunt or an exhalation. And, of course,
it makes little sense to speak of a sentence or fragment
as being unified or coherent in the senses in which those terms are usually used when describing paragraphs.
Of all the types of writing that exist, only nonfiction academic prose contains
paragraphs that frequently contain topic sentences, and in such prose, only
about half the paragraphs do.[9]

 

The fact is that the traditional schoolroom definition of a paragraph describes the fairly rare case in which a single general
main idea is illustrated by specifics.
Of course, few paragraphs in the real world work that way. Throw a dart at a page in Harper’s
magazine. You will not hit a Bain-style paragraph. There are many, many other ways to put several sentences together sensibly. The narrative
way is the simplest: Present
one darned thing after another.
But one can also write quite an effective
paragraph that, for example, consists of a thesis, an antithesis, and a synthesis; such a paragraph comes to a conclusion but has no overall
main idea in any reasonable sense of the term “main idea.” Many well-crafted nonnarrative paragraphs depart radically from the schoolbook model, having no overall,
paragraph-level
organizational scheme but, rather,
only a part-by-part organization in which each sentence
is connected to the one before it and to the one after it in any of a myriad ways. In such cases, the writer often begins a new paragraph only because he or she has run out of steam. Whew! The study of these part-by-part connections that hold ideas together
is sometimes referred
to as discourse analysis.

 

Because of the variety that exists in paragraphs in real-world writing, no general
finding-the-main-idea skill for paragraphs exists. The same is true for other
kinds of writing—for poems, essays, memoirs, oral histories, plays, technical
manuals, nonfiction books, sermons, and so on. Writing is too various, and the
reading comprehension strategy is far, far too specific, akin to determining
whether a given entity is a human being by asking if his or her name is Bob.

Teachers would be disabused of
their notion that paragraphs typically contain topic sentences that state “the
main idea” if they actually studied paragraphs in the wild, as linguists
sometimes do.
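By way of illustration, here is one crude way to look at paragraphs in the wild (a hypothetical, back-of-the-envelope heuristic, not the method of the studies cited in note [9]): score each sentence by how much of its vocabulary recurs in the rest of its paragraph. In a Bain-style paragraph the opening sentence should dominate; in most real paragraphs, especially narrative ones, no sentence does.

import re
from collections import Counter

def overlap_scores(paragraph):
    # Score each sentence by how many of its words recur elsewhere in the
    # same paragraph; a rough, hypothetical proxy for "topic sentencehood."
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', paragraph) if s.strip()]
    bags = [Counter(re.findall(r'[a-z]+', s.lower())) for s in sentences]
    scores = []
    for i, bag in enumerate(bags):
        rest = Counter()
        for j, other in enumerate(bags):
            if j != i:
                rest.update(other)
        scores.append(sum(min(count, rest[word]) for word, count in bag.items()))
    return list(zip(sentences, scores))

# Try it on any paragraph pulled from a magazine or a novel.
sample = ("She woke before dawn. The kettle refused to light. "
          "By noon the letter still had not come.")
print(overlap_scores(sample))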

That there exists no general,
universally or even often applicable finding-the-main-idea skill does not mean,
of course, that there don’t exist some passages, in written works, that have
main ideas and some subset of those in which the main ideas are stated
explicitly. But it does mean that there is no essential characteristic
that all main ideas or passages or paragraphs that have main ideas have and
that there is no general ability to find main ideas, the possession of which
can be tested for reliably across all texts at grade level.
One has to be
an educator, living in a world of cooked, formulaic, schoolroom writing, to
believe in such a mythical entity.

Again, treating basic mathematics
and ELA as though they were the same sort of thing, with skills that can be
similarly enumerated and tested, is a fundamental mistake of the kind that
philosophers call a category error. Worse yet, it’s a fundamental category
error on which our lists of standards, our summative standardized tests, and
our district-level interim standardized tests in ELA are all based!

Correcting this error would mean
completely redoing what we are doing in a way that operationalizes our very
vague, ill-formulated notions like “finding the main idea” or “making
inferences/drawing conclusions.” But operationalizing them would be impossible
because no general set of operations can be delineated.

But that’s not the biggest problem
with the reading comprehension strategies approach to teaching reading. As we
saw above, comprehending a text requires being able to decode its symbols
(phonics); being able, automatically and fluently, to parse its syntax; having
knowledge of the vocabulary in the text and of the referents and uses of that
vocabulary in specific, real-world knowledge domains; having the requisite
background knowledge assumed by the author; and being familiar with the broad
range of extra-textual, contextual determinants of meaning that include
conventions of genre, particular interests and concerns that the author had,
the occasion and audience to which the text responds, and idiomatic,
figurative, and rhetorical usages. Ignoring all these in order to teach
“reading comprehension strategies,” day in and day out, is like teaching a
class in sailing that concentrates, entirely, on methods for polishing the
bright work and folding the sails. The important stuff isn’t even dealt with.

The current standards-and-testing
regime has exacerbated this problem, for the new standards consist almost
entirely of vaguely conceived abstractions like the so-called “reading
strategies.” Our students spend their days doing test practice questions based
on isolated, random snippets of text and applying vaguely conceived strategies
to answer those questions, and the real determinants of reading comprehension are
not addressed. The opportunity cost of all this is, of course, quite high—our
students fail to learn to read well because the actual determinants of
comprehension have not, in their education, been addressed. There is no
magic bullet—no list of strategies and standards to be mastered via practice
exercises—that will substitute for learning to decode; internalizing a robust,
sophisticated grammar; acquiring specific domain knowledge; becoming familiar
with text conventions; and reading a lot of whole texts—the actual determinants
of the ability to comprehend what one reads.

What Reading Comprehension
Instruction and Astrology Have in Common

Permit me another analogy. Before
there was science, there was magic. Astrology preceded astronomy. The problem
with astrology, of course, is that it had no causal mechanism. It was
absolutely prescientific. Current approaches to reading comprehension instruction
are just like that. Both are performed in ignorance of what science teaches us
about causal mechanisms, in the one case with regard to the stars and to
determinants of human personality and fortunes, in the other with regard to how
children acquire the ability to comprehend texts. We now have actual sciences
dealing with human personality and fortunes—psychology and economics. We also
have actual sciences dealing with how people learn to comprehend
language—linguistics and cognitive science. These last two sciences have taught
us a lot about how kids actually learn to make sense of language, and one day,
perhaps, our reading teachers, curriculum designers, test designers, and policy
makers will learn some of what science has taught us, in the past sixty years,
about language. Until then, we shall be in the Dark Ages and might as well replace
our classwork—all those practice exercises on reading comprehension
skills–with drawing sigils and muttering magical incantations.

 


[1]
Augustine, Confessions. Quoted in Wittgenstein, Ludwig. Philosophical
Investigations. 4th ed. Trans. G.E.M. Anscombe, P.M.S. Hacker, and
Joachim Schulte. New York: Wiley-Blackwell, 2009.

[2]
Chomsky, Noam. Review of Verbal Behavior by B.F. Skinner. Language (1959) 35: 26-58.

[3]
The examples are from Radford, Andrew. Minimalist Syntax. New York:
Cambridge U.P., 2004.

[4]
Hart, Betty, and Todd R. Risley, “The Early Catastrophe: The 30 Million Word
Gap by Age 3.” American Educator, Spring 2003, 4-9.

[5] Palincsar, A. S., and A. L. Brown. “Reciprocal Teaching of Comprehension-Fostering and Comprehension-Monitoring Activities.” Cognition and Instruction (1984).
1:2, 117-175.

[6]
Heidegger, Martin. Being and Time. Trans. John Macquarrie and Edward
Robinson. New York: Harper, 2008. P. 114.

[7]
Stevens, Wallace. “Credences of Summer,” in The Collected Poems of Wallace
Stevens.
New York: Knopf, 1982, P. 372.

[8]
Bain, Alexander. English Composition and Rhetoric: A Manual. New York:
Appleton, 1866.

[9]
See McCarthy et al. “Identifying Topic Sentencehood.” Behavior Research Methods 40: 647-664, and Popken, R. “A Study of Topic Sentence Use in Academic Writing.” Written Communication 4: 209-228.

 

 

Mike Klonsky assesses the Illinois funding bill, which set aside $75 million for vouchers, and declares it a total sell-out by Democrats.

http://michaelklonsky.blogspot.com/2017/08/il-voucher-vote-reversal-shows-dems.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed:+mikeklonsky+(SmallTalk)&m=1

“The passage of the IL school funding bill, which included $75M for private school vouchers, is being hailed as a “model of bi-partisanship” and “compromise” by both Gov. Rauner and Speaker Madigan. It was neither. It was instead, an exercise in duplicity on the part of state Democrats and a lifeline for a Republican governor who had become totally isolated and ripe for a 2018 election defeat after his initial veto of the same school funding bill.

“Democrats demonstrated once again that they’re a party that stands for nothing and that their campaign slogan of a “Better Deal” as opposed to resistance, is little more than a call for more opportunism. It’s a recipe for disaster in the upcoming elections where Republicans shouldn’t have a leg to stand on.”

The die was cast, writes Mike, after Rahm met with the local Cardinal and with DeVos.

“There was no “compromise” to be made since the voucher deal between Chicago’s mayor and Cardinal Cupich, had been in the works for months, specifically since Rahm’s closed door meeting with Trump’s education secretary, Betsy DeVos back in April. Monday’s two House votes were mere Madigan-choreographed theater.”

Barry Lynn writes about the dangers of monopolies. Recently he has written critically about Google’s efforts to dominate the tech world.

This became problematic for Barry Lynn, because Google is one of the major financial benefactors of the New America Foundation, which employed Lynn. Eric Schmidt–former CEO of Google–was chairman of the board of NAF until 2016. Google and Schmidt’s family foundation have given $19 million to NAF.

The New America Foundation just fired Barry Lynn, who has worked there since 2001. Its spokesman said that his ouster had nothing to do with his criticism of Google. Right.

I was a board member of NAF in the early years of this century. What I learned while I was there was that it is not a left-leaning foundation. It is a corporate-driven foundation. It is in constant fundraising mode, and the board had many corporate moguls, including Eric Schmidt. I enjoyed it because I met Fareed Zakaria and other very cool people.

After a few years, I was asked to leave the board. Unlike Lynn, I never found out why they kicked me out. I have been ousted from some of the finest think tanks in D.C., including Brookings (I was kicked out ostensibly for “inactivity,” but it happened on the very day I lambasted Mitt Romney in the New York Review of Books, and my program head was advising Romney). If I lined up the think tanks that ousted me and the ones I abandoned, they would stretch from New York to California.

Lynn is lucky he is out. He is free to write what he wants without fearing the wrath of Google.

In Illinois, Republicans refused to pass legislation to fund public schools unless Democrats agreed to add $75 million for vouchers for religious schools.

Governor Rauner, who loves charter schools and hates public schools, was happy to go along. So was Democratic Speaker Mike Madigan, who fell in love with the scandal-ridden Gulen charters after several free trips to Turkey. (Now that the government of Turkey wants to imprison Imam Fethullah Gulen for sponsoring a coup attempt, there are not likely to be any more Gulen-paid trips to Turkey.)

Chicago Tribune writer Eric Zorn describes the secretive deal to funnel money to religious schools here.

Zorn will wait and see what happens, but it is already well known that most of the money will be spent on children already attending private and religious schools.

What a disgrace!

This is a victory for Betsy DeVos and Trump.