Benjamin Herold of Education Week reports that students who took the PARCC test online got lower scores than those who took the test with paper and pencil.
“Students who took the 2014-15 PARCC exams via computer tended to score lower than those who took the exams with paper and pencil, a revelation that prompts questions about the validity of the test results and poses potentially big problems for state and district leaders.
“Officials from the multistate Partnership for Assessment of Readiness for College and Careers acknowledged the discrepancies in scores across different formats of its exams in response to questions from Education Week….
“It is true that this [pattern exists] on average, but that doesn’t mean it occurred in every state, school, and district on every one of the tests,” Jeffrey Nellhaus, PARCC’s chief of assessment, said in an interview….
“In general, the pattern of lower scores for students who took PARCC exams by computer is the most pronounced in English/language arts and middle- and upper-grades math.
“Hard numbers from across the consortium are not yet available. But the advantage for paper-and-pencil test-takers appears in some cases to be substantial, based on independent analyses conducted by one prominent PARCC state and a high-profile school district that administered the exams.
“In December, the Illinois state board of education found that 43 percent of students there who took the PARCC English/language arts exam on paper scored proficient or above, compared with 36 percent of students who took the exam online. The state board has not sought to determine the cause of those score differences.”

Follow the link to the article. In Illinois, the difference in scores is about 7 percentage points. Substantial.
“Who dares to teach must never cease to learn.” Janice Preston, Janice Preston Educational Services
“Who dares to teach must never cease to learn.”
And that learning and teaching mean nothing, or actually cause harm and injustice, when they do not adhere to “fidelity to truth.”
This should come as no surprise.
Everyone knows that the electrons in graphite pencils are smarter than those in copper wires.
Most of the time, the ones in the wires can’t even remember where they live (which atoms they belong to).
They’ll never know, right? The test-takers? The volunteers?
I love how they’re completely left out of this.
Ha Ha Ha !
Laughing out loud!
Another reason to OPT OUT!
I am thinking of the complicated screens and scrolling I saw while taking some of the practice tests. I yearned for a paper test.
I am also thinking that there are other aspects of this problem that will unfold, like the common practice of eliminating or cutting back on cursive writing as more time is invested in keyboarding skills. Grade levels might matter in all of these tests, as might the relative ease of response for left- versus right-handed students, and so on.
The bigger point is that there does not seem to have been much field testing prior to this hindsight “discovery” of differences in scores.
Cursive writing? That went out the window years ago apparently. Almost all the students in my high school special ed classes could not read or write cursive. I actually posted the cursive alphabet and demonstrated how to sign their names in cursive for several of them at their request.
This does not surprise me. I wrote about the issue of computer-adaptive vs. paper-and-pencil tests when word of plans to do these tests on computers first came out. In my view, there are few advantages and many disadvantages to test-takers with the computer-adaptive method. It takes control of the process out of the hands of the student thus limiting or eliminating a host of choices that the wise test-taker would normally want to be able to make.
For example, if I am taking a paper-and-pencil exam, I can do questions in whatever order I see fit (at least within a given assigned section). That is not only psychologically empowering but also practically important: it allows me to skip to questions that seem most accessible to me and avoid more difficult questions until I have a better sense of what’s going on (I’m thinking here about typical reading comprehension passages and question sets); I can then use understanding gleaned from answering questions I grasp to parse and answer questions I’ve chosen to temporarily skip.
And, of course, this leads to the issue of being able to go back to earlier questions. Whether the test section is reading comprehension, science, mathematics, or what-have-you, it’s not unlikely that answering a question (or even just reading one) further along will provide information or insight that will allow me to answer (or correct a wrong answer to) an earlier question. If the computer won’t let me go back, I’m sunk. On a paper-and-pencil test, I can always backtrack within the current section.
How can it possibly be to the student’s advantage to lose such options?
And I haven’t gotten into the “adaptive” aspect, in which the computer starts narrowing down what questions to ask the student next based on his/her answers to previous questions. This feature can reduce the time students have to spend taking the test, a fact many students love, but it is really not an advantage if they are interested in maximizing their score. Students who start slowly may find themselves “finished” (in more ways than one) before they’ve gotten a chance to get their brains in gear. The computer can fairly quickly peg such students as “low-performing” and kick them out of the test. And that’s just the tip of the iceberg, in my view.
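To make concrete what “adaptive” means here, the toy sketch below shows the kind of routing loop an adaptive engine uses: pick an item near the current ability estimate, update the estimate after each answer, repeat. It is purely illustrative; PARCC is a fixed-form test, and real adaptive engines (such as Smarter Balanced’s) use IRT-based item selection rather than this crude difficulty ladder.

```python
# Toy sketch of computer-adaptive routing -- illustrative only, not any
# consortium's actual algorithm. Real engines use IRT-based item selection;
# this crude version just steps a difficulty estimate up or down.

def run_adaptive_session(item_bank, answer_fn, max_items=10):
    """Administer items near the current ability estimate."""
    ability = 0.0                      # start from an average-ability guess
    administered = []
    for _ in range(max_items):
        remaining = [i for i in item_bank if i["id"] not in administered]
        if not remaining:
            break
        # choose the unused item whose difficulty is closest to the estimate
        item = min(remaining, key=lambda i: abs(i["difficulty"] - ability))
        administered.append(item["id"])
        correct = answer_fn(item)      # the student's response to this item
        ability += 0.5 if correct else -0.5   # crude up/down adjustment
    return ability, administered

# Hypothetical student who only answers the easier items correctly.
bank = [{"id": k, "difficulty": d} for k, d in enumerate([-2, -1, 0, 1, 2] * 4)]
final_estimate, items_seen = run_adaptive_session(
    bank, answer_fn=lambda item: item["difficulty"] < 0)
print(final_estimate, items_seen)
```

The point is only the structural one: which items a student ever sees, and when the session ends, depend on the engine’s running estimate rather than on choices the student makes.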
No test or testing format is perfect or equally fair to every sort of student. So these questions must be posed over and over: Why are we giving a particular test? Who benefits? What is the information being used for?
If test-makers and those mandating the administration of tests are not accountable to the public for giving open, honest answers to the above questions and other related inquiries, then the system is unlikely to serve the best interests of students, parents, or educators. Rather, it will primarily or entirely serve the interests of the test publishers and the political interests who back punitive testing. And that is precisely the current corrupt and deeply unfair situation we have.
Is there a link to your work? Ed Week is behind a paywall, and I want to send this study to my state legislators, who even now are discussing 1:1 computing initiatives. I’ve already sent them the OECD study, and I want as much ammunition against this stupid idea as possible.
PARCC Scores Lower for Students Who Took Exams on Computers
By Benjamin Herold, Education Week, Feb. 3, 2016
Seventh graders at Marshall Simonds Middle School in Burlington, Mass., look at a PARCC practice test to give them some familiarity with the format before field-testing in 2014 of the computer-based assessments aligned with the common core. (Gretchen Ertl for Education Week-File)
Students who took the 2014-15 PARCC exams via computer tended to score lower than those who took the exams with paper and pencil, a revelation that prompts questions about the validity of the test results and poses potentially big problems for state and district leaders.
Officials from the multistate Partnership for Assessment of Readiness for College and Careers acknowledged the discrepancies in scores across different formats of its exams in response to questions from Education Week.
“It is true that this [pattern exists] on average, but that doesn’t mean it occurred in every state, school, and district on every one of the tests,” Jeffrey Nellhaus, PARCC’s chief of assessment, said in an interview.
“There is some evidence that, in part, the [score] differences we’re seeing may be explained by students’ familiarity with the computer-delivery system,” Nellhaus said.
In general, the pattern of lower scores for students who took PARCC exams by computer is the most pronounced in English/language arts and middle- and upper-grades math.
Hard numbers from across the consortium are not yet available. But the advantage for paper-and-pencil test-takers appears in some cases to be substantial, based on independent analyses conducted by one prominent PARCC state and a high-profile school district that administered the exams.
In December, the Illinois state board of education found that 43 percent of students there who took the PARCC English/language arts exam on paper scored proficient or above, compared with 36 percent of students who took the exam online. The state board has not sought to determine the cause of those score differences.
Meanwhile, in Maryland’s 111,000-student Baltimore County schools, district officials found similar differences, then used statistical techniques to isolate the impact of the test format.
A student at Marshall Simonds Middle School in Burlington, Mass., reviews a question on a PARCC practice test before 2014 field-testing of the computer-based assessments. (Gretchen Ertl for Education Week-File)
They found a strong “mode effect” in numerous grade-subject combinations: Baltimore County middle-grades students who took the paper-based version of the PARCC English/language arts exam, for example, scored almost 14 points higher than students who had equivalent demographic and academic backgrounds but took the computer-based test.
“The differences are significant enough that it makes it hard to make meaningful comparisons between students and [schools] at some grade levels,” said Russell Brown, the district’s chief accountability and performance-management officer. “I think it draws into question the validity of the first year’s results for PARCC.”
4 of 5 PARCC Exams Taken Online
Last school year, roughly 5 million students across 10 states and the District of Columbia sat for the first official administration of the PARCC exams, which are intended to align with the Common Core State Standards. Nearly 81 percent of those students took the exams by computer.
Scores on the exams are meant to be used for federal and state accountability purposes, to make instructional decisions at the district and school levels, and, in some cases, as an eventual graduation requirement for students and an eventual evaluation measure for teachers and principals.
Several states have since dropped all or part of the PARCC exams, which are being given again this year.
PARCC officials are still working to determine the full scope and causes of last year’s score discrepancies, which may partly result from demographic and academic differences between the students who took the tests on computers and those who took it on paper, rather than the testing format itself.
Assessment experts consulted by Education Week said the remedy for a “mode effect” is typically to adjust the scores of all students who took the exam in a particular format, to ensure that no student is disadvantaged by the mode of administration.
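(For readers wondering what such an adjustment looks like, one common, simple form is linear linking: rescale the computer-mode scores so that group’s mean and spread match the paper-mode group’s. The sketch below is a generic, hypothetical illustration, not PARCC’s or any state’s actual equating procedure.)

```python
# Generic sketch of one simple "mode effect" adjustment: linear linking,
# which rescales computer-mode scores so that group's mean and spread match
# the paper-mode group's. Hypothetical numbers; not PARCC's actual procedure.
from statistics import mean, stdev

def linear_link(computer_scores, paper_scores):
    """Map computer-mode scores onto the paper-mode score scale."""
    m_c, s_c = mean(computer_scores), stdev(computer_scores)
    m_p, s_p = mean(paper_scores), stdev(paper_scores)
    return [m_p + (x - m_c) * (s_p / s_c) for x in computer_scores]

# Example: online scores running about 10 points lower get shifted back up.
paper_scores = [750, 760, 770, 780, 790]
online_scores = [740, 750, 760, 770, 780]
print(linear_link(online_scores, paper_scores))   # [750.0, 760.0, 770.0, 780.0, 790.0]
```

In practice, testing programs use more careful methods, such as matched samples or equipercentile equating, so that differences in who tested in each format are not mistaken for an effect of the format itself.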
PARCC officials, however, said they are not considering such a solution. It will be up to district and state officials to determine the scope of any problem in their schools’ test results, as well as what to do about it, Nellhaus said.
Such uncertainty is bound to create headaches for education leaders, said Michael D. Casserly, the executive director of the Council of the Great City Schools, which represents 67 of the country’s largest urban school systems.
“The onus should be on PARCC to make people aware of what these effects are and what the guidelines are for state and local school districts to adjust their data,” Casserly said.
Comparing Online and Paper Tests a Longstanding Challenge
The challenges associated with comparing scores across traditional and technology-based modes of test administration are not unique to PARCC.
The Smarter Balanced Assessment Consortium, for example, told Education Week that it is still investigating possible mode effects in the results from its 2014-15 tests, taken by roughly 6 million students in 18 states. That consortium, which, like PARCC, offers exams aligned with the common core, has yet to determine how many students took the SBAC exam online, although the proportion is expected to be significantly higher than in PARCC states.
Officials with Smarter Balanced are in the early stages of preparing technical reports on that and other matters.
“We’ll analyze the operational data. I can’t speculate in advance what that implies,” Tony Alpert, the executive director of Smarter Balanced, said in an interview. “We don’t believe that differences in scores, if there are any, will result in different decisions that [states and districts] might make based on the test.”
States that administer their own standardized exams, meanwhile, have for years conducted comparability studies while making the transition from paper- to computer-based tests. Past studies in Minnesota, Oregon, Texas, and Utah, for example, have returned mixed results, generally showing either a slight advantage for students who take the tests with paper and pencil, or no statistically significant differences in scores based on mode of administration.
The National Center for Education Statistics, meanwhile, is studying similar dynamics as it moves the National Assessment of Educational Progress, or NAEP, from paper to digital-administration platforms.
An NCES working paper released in December found that high-performing 4th graders who took NAEP’s computer-based pilot writing exam in 2012 scored “substantively higher on the computer” than similar students who had taken the exam on paper in 2010. Low- and middle-performing students did not similarly benefit from taking the exam on computers, raising concerns that computer-based exams might widen achievement gaps.
A still-in-process analysis of data from a study of 2015 NAEP pilot test items (that were used only for research purposes) has also found some signs of a mode effect, the acting NCES commissioner, Peggy G. Carr, told Education Week.
“The differences we see across the distribution of students who got one format or another is minimal, but we do see some differences for some subgroups of students, by race or socioeconomic status,” she said.
One key factor, according to Carr: students’ prior exposure to and experience with computers.
“If you are a white male and I am a black female, and we both have familiarity with technology, we’re going to do better [on digitally based assessment items] than our counterparts who don’t,” she said.
The NCES is conducting multiple years of pilot studies with digitally based items before making them live, in order to ensure that score results can be compared from year to year.
A PARCC spokesman said the consortium did analyze data from a 2014 field test of the exam to look for a possible mode effect, but only on an item-by-item basis, rather than by analyzing the exam taken as a whole. The analysis found no significant differences attributable to the mode of administration.
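(A hypothetical illustration of why that distinction matters: per-item differences between modes can each look negligible while still adding up across a full-length form. The numbers below are invented purely for illustration.)

```python
# Invented numbers, for illustration only: proportion correct per item, by
# mode. Each per-item gap is small, but summed over a full-length form the
# gaps imply a raw-score difference that can matter near a cut score.
paper_pvalues  = [0.72, 0.65, 0.80, 0.58, 0.69]   # hypothetical paper-mode % correct
online_pvalues = [0.70, 0.63, 0.78, 0.56, 0.67]   # hypothetical online-mode % correct

gaps = [p - o for p, o in zip(paper_pvalues, online_pvalues)]
mean_gap = sum(gaps) / len(gaps)
print([round(g, 2) for g in gaps])   # each item differs by only ~0.02
print(round(mean_gap * 50, 1))       # expected extra raw-score points on a 50-item form
```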
When asked why 2014-15 test scores were released to the public before a comprehensive analysis of possible mode effects was conducted, Nellhaus, PARCC’s chief of assessment, said responsibility rests with the states in the consortium. “People were very anxious to see the results of the assessments, and the [state education] chiefs wanted to move forward with reporting them,” Nellhaus said. “There was no definitive evidence at that point that any [score] differences were attributable to the platform.”
Illinois, Baltimore County Find Differences in PARCC Scores By Testing Format
The Illinois state school board made its PARCC results public in mid-December. In a press release, it made indirect mention of a possible mode effect, writing that the board “expects proficiency levels to increase as both students and teachers become more familiar with the higher standards and the test’s technology.”
A comparison of online and paper-and-pencil scores done by the state board’s data-analysis division was also posted on the board’s website, but does not appear to have been reported on publicly.
That analysis shows often-stark differences by testing format in the percentages of Illinois students who demonstrated proficiency (by scoring a 4 or 5) on PARCC English/language arts exams across all tested grades. Of the 107,067 high school students who took the test online, for example, 32 percent scored proficient. That’s compared with 50 percent for the 17,726 high school students who took the paper version of the exam.
The differences by format are not so pronounced in elementary-grades math; in grades 3-5, in fact, slightly higher percentages of students scored proficient on the online version of the PARCC exam than on the paper version.
But proficiency rates among paper-and-pencil test-takers were 7 to 9 points higher on the 8th grade and high school math exams.
The Illinois board has not conducted any further analysis of the results to determine the cause of those discrepancies. Board officials declined to be interviewed.
“The statewide results in Illinois suggest some differences in performance between the online and paper administrations of the assessment,” according to a statement provided by the board. “There is no consistent relationship from district to district. … Both versions of the test provide reliable and valid information that teachers and parents can use to identify student strengths and areas needing improvement.”
In Maryland, meanwhile, more than 41,000 Baltimore County students in grades 3-8 took the PARCC exams in 2014-15. Fifty-three percent of students took the math exam online, while 29 percent took the English/language arts exam online. The mode of test administration was decided on a school-by-school basis, based on the ratio of computers to students in each building’s largest grade.
Like Illinois, Baltimore County found big score differences by mode of test administration. Among 7th graders, for example, the percentage of students scoring proficient on the ELA test was 35 points lower among those who took the test online than among those who took the test on paper.
To identify the cause of such discrepancies, district officials compared how students and schools with similar academic and demographic backgrounds did on each version of the exams.
They found that after controlling for student and school characteristics, students were between 3 percent and 9 percent more likely to score proficient on the paper-and-pencil version of the math exam, depending on their grade levels. Students were 11 percent to 14 percent more likely to score proficient on the paper version of the ELA exam.
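(The article does not show the district’s model, but the general technique is to fit a model that predicts proficiency from student characteristics plus a test-mode indicator; the coefficient on the mode indicator is then the estimated “mode effect” net of those controls. The sketch below uses simulated data and hypothetical variable names purely to illustrate the idea; it is not Baltimore County’s actual analysis.)

```python
# Simulated illustration of estimating a "mode effect" after controlling for
# student characteristics. Data, variable names, and effect sizes here are
# all hypothetical; this is not the district's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
prior_score = rng.normal(0.0, 1.0, n)     # prior achievement (standardized)
low_income = rng.integers(0, 2, n)        # example demographic control
online = rng.integers(0, 2, n)            # 1 = took the exam on computer

# Simulate proficiency with a built-in penalty for testing online.
log_odds = 1.2 * prior_score - 0.4 * low_income - 0.5 * online
proficient = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

X = np.column_stack([prior_score, low_income, online])
model = LogisticRegression().fit(X, proficient)

# The coefficient on the mode indicator is the estimated effect of testing
# online, net of the other controls (it should recover roughly -0.5 here).
print("estimated mode effect (log-odds):", round(model.coef_[0][2], 2))
```

Comparing that coefficient to zero is what lets analysts argue that the format itself, rather than who happened to test in each format, accounts for part of the gap.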
“It will make drawing comparisons within the first year’s results difficult, and it will make drawing comparisons between the first- and second-year [PARCC results] difficult as well,” said Brown, the accountability chief for the Baltimore County district.
“This really underscores the need to move forward” with the district’s plan to move to an all-digital testing environment, he said.
A Big ‘Bug in the System’
In the meantime, what should state and district leaders, educators, and parents make of such differences?
The test results still have value, said Nellhaus of PARCC.
“This is still useful and important information providing a wealth of information for schools to improve instruction and identify students who need assistance or enrichment,” he said.
But possible mode effects on multistate-consortia exams should be taken seriously, at least in the short term, and especially if they have not been accounted for before test results are reported publicly, said assessment experts consulted by Education Week.
“Because we’re in a transition stage, where some kids are still taking paper-and-pencil tests, and some are taking them on computer, and there are still connections to high stakes and accountability, it’s a big deal,” said Derek Briggs, a professor of research and evaluation methodology at the University of Colorado at Boulder.
“In the short term, on policy grounds, you need to come up with an adjustment, so that if a [student] is taking a computer version of the test, it will never be held against [him or her],” said Briggs, who serves on the technical-advisory committees for both PARCC and Smarter Balanced.
Such a remedy is not on the table within PARCC, however.
“At this point, PARCC is not considering that,” Nellhaus said. “This needs to be handled very locally. There is no one-size-fits-all remedy.”
But putting that burden on states and school districts will likely have significant implications on the ground, said Casserly of the Council of the Great City Schools.
“I think it will heighten uncertainty, and maybe even encourage districts to hold back on how vigorously they apply the results to their decisionmaking,” he said.
“One reason many people wanted to delay the use [of PARCC scores for accountability purposes] was to give everybody a chance to shake out the bugs in the system,” Casserly added.
Are you sure PARCC is a computer-adaptive test? I thought it wasn’t.
FLERP,
Why would EdWeek report this story if PARCC is not offered in both formats–online and paper?
Computer-based tests aren’t necessarily “computer-adaptive.”
Agreed, FLERP. I don’t know if PARCC is adaptive or not. But that’s where online testing is headed. I heard Supt. Elia in NY say that testing would be embedded so that there would not be a test day. Testing would go on whenever the student was online.
MPG,
“Who benefits?” When spring test results are given to schools in October or November, neither students nor the classroom teachers conducting daily instruction benefit.
Thank you SO much for this, Diane!
I really think your description of the strategies for tackling a test that are lost with current computer applications is extremely important. These strategies really involve common-sense thinking that we use every day when we are trying to make sense of information.
Test results could be instantaneous and would still be invalid and useless.
Does it really matter when Pearson reveals the artificial, super failure rates?
These are not criterion-referenced tests. Norm referencing tells us nothing but a student’s score relative to others.
PARCC/SBAC/Pearson NY are academic death traps designed to trick, frustrate, tire-out, and break the will of young children.
TEST-BASED reform is now a proven FAILURE. Fifteen years of testing and virtually zero improvement in actual reading, writing, or arithmetic/computation skills.
“. . . that prompts questions about the validity of the test results and poses potentially big problems for state and district leaders.”
The answers to those questions were given by Noel Wilson in 1997: the test results are COMPLETELY INVALID and can never be otherwise, due to the myriad epistemological and ontological errors and falsehoods and psychometric fudges involved in the process of making educational standards and the accompanying standardized test.
“‘It is true that this [pattern exists] on average, but that doesn’t mean it occurred in every state, school, and district on every one of the tests,’ Jeffrey Nellhaus, PARCC’s chief of assessment, said in an interview….”
And what he says doesn’t make a damn bit of difference in the COMPLETE INVALIDITY of ol Jeffrey’s PARCC test. It has been and always will be COMPLETELY INVALID as proven by Wilson in his never refuted nor rebutted dissertation “Educational Standards and the Problem of Error” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine.
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference,” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think of a college professor who “knows” the students’ capabilities and grades them accordingly), the General Frame (think of standardized testing that claims to have a “scientific” basis), the Specific Frame (think of learning by objective, like computer-based learning, where a correct answer is required before moving on to the next screen), and the Responsive Frame (think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback). Each category has its own sources of error, and more error is introduced when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid (errorless, or supposedly at least with minimal error) [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations and the test and results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “‘something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words, students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an ‘A’ student” is almost as harmful as the “I’m an ‘F’ student” in hindering students from becoming independent, critical, and free thinkers. And having independent, critical, and free thinkers is a threat to the current socio-economic structure of society.
@FLERP! No, I’m not sure that PARCC is currently computer-adaptive, but as Diane said, that’s in the works, and it’s a trend in high-stakes testing that has to be repeatedly interrogated. If we don’t look at the down-side of various “improvements” that computer-based testing offer and how those things, including computer-adaptive testing, are likely to hurt test-takers more than help them, the test-makers and politicians and investors will once again have changed the rules of the game with little or no informed opposition or questioning.
As @2old2teach said, my points are pretty much a matter of common sense, but only to those who have given serious thought to how standardized testing works. I’ve been tutoring and teaching students how to improve their performance on such tests since the late 1970s. And I would never voluntarily agree to take a test on a computer (vs. paper-and-pencil) unless I knew that I was not being stripped of the freedoms I mentioned in my first comment. When students can’t go back, can’t answer questions in the order that suits them, can’t change answers they’ve submitted, etc., they are being deprived of opportunities to make the most use of their thinking. No teacher-generated test given on paper is designed to minimize students’ chances to show what they know. These computer-based tests do that in significant ways.
Whether these limitations are part of some conscious, insidious plan to hurt student performance or simply examples of unintended consequences of making things faster, more automated, and less vulnerable to cheating (there won’t be a Rheerasure-gate with computer-based testing, I suppose), I can’t say. But I am certain that parents, students, and educators should be better-informed than most of them are about the implications and potential disadvantages of this approach. And note that everything I wrote before mentioning computer-adaptive test issues can still hold for any computer-based test. If students must do questions in the order presented, cannot change answers once submitted, cannot return to previous questions in a section, then they are being hamstrung from applying important test-taking strategies and prevented from using their best thinking to the maximum.
The test-makers have always had a stranglehold on the rules. They are very much like the folks who run casinos in this regard. If you know anything about Blackjack, you can appreciate my suggestion that these computer “improvements” are akin to letting the house change the game so that the deck is reshuffled after every hand. At that point, the tiny edge that skilled players have is utterly eliminated. Maybe the ETS will bring back non-referential reading next, an old format from one of its tests (I think it was on the LSAT at some point) in which students read passages and then must answer multiple-choice questions on those passages but without being allowed to refer back to the text of the passages at all. Fun times!
SBAC is computer-adaptive but PARCC is not.
Ask any teacher. Students are taught to take notes, underline key ideas, and use coding while reading. These strategies are either unavailable or difficult to manipulate on a computer. Also, it’s easier to flip between questions and passages on a paper test.