In this article, posted on Valerie Strauss's blog, Lisa Guisbond of FairTest interviews New York opt-out leader Jeanette Deutermann about the creeping incursion of online assessment into regular classroom use. I remember hearing New York's Commissioner of Education MaryEllen Elia predict the advent of "embedded assessments," in which students would be continually assessed as they complete their assignments online. No need for a "test." The testing would be daily, continual, and invisible.
Guisbond writes:
Long Island parent Jeanette Deutermann is only half-joking when she says she should give a Christmas gift to her son’s school computer this year instead of the teacher. She sees the way computer-based curriculum-plus-testing packages have taken control of her son’s classroom, and she doesn’t like it.
Deutermann has been a leader in New York State’s unprecedented opt-out movement. Now she is calling out the latest damaging twist in education reformers’ efforts to fatten the pig by weighing it even more often.
Deutermann's fifth-grade son and his classmates are among those on the leading edge of this craze, now that their school has adopted a product called i-Ready. She's alarmed that her son gets daily computer-based math and reading lessons triggered by the results of a computer-based test. He also takes i-Ready exams three or more times a year and even gets i-Ready-based homework.
She laments a shift away from students learning how to communicate and collaborate with one another on group projects to more and more time in solitary communion with a computer screen…
We already know that high-stakes exams narrow and dumb down instruction, depress student engagement, and produce inaccurate indicators of learning. Now we must be vigilant and prepared to push back against these new threats:
*The push for frequent online or computer-based testing threatens to reverse recent progress in reducing the amount of testing and lowering the stakes attached to it.
*Instead of schools with trained educators who use their professional expertise to personalize learning for students, these programs perpetuate standardized, test-driven teaching and learning, now automated for “efficiency.”
*Frequent online student assessments require teachers to review copious amounts of data instead of teaching, observing and relating to students.
*In truly student-centered learning, children guided by teachers can choose among topics, materials and books based on their interests and passions. But the vision promoted by many education technology vendors and proponents is of students learning material selected by online or computer-based adaptive assessments.
*Companies and government agencies are amassing unprecedented amounts of student data through online learning and testing platforms. There is widespread concern about third parties' access to this data and about the violations of privacy that access enables. Parent groups and others advocate legislation to provide transparency and protect data from misuse. In the meantime, security breaches and unauthorized data sharing remain serious risks.
*Frequent online testing creates obstacles to opting out as a way to call attention to and protest testing overkill. A robust national opt-out movement created enormous pressure for change. But a shift to online exams creates new hurdles for parents who want to opt their children out.
*After several decades, researchers have seen little positive impact from educational technology. Meanwhile, researchers warn of a range of negative consequences from overexposure to technology and screen time. These include damage to intellectual, physical and emotional development, threats to privacy, and, ironically, increased standardization.

See http://edpolicy.education.jhu.edu/wordpress/?p=394
Do Formative Assessments Influence Student Learning?: Research on i-Ready and MAP
Alanna Bjorklund-Young and Carey Borkoski
Research Fellows
Johns Hopkins Institute for Education Policy
November 3, 2016
Excerpt: Researchers have conducted numerous analyses of these two assessments. Unfortunately, parties associated with the publishers of the assessments have authored the studies, which inevitably calls objectivity into question. For example, Curriculum Associates, which owns i-Ready, hired the Education Research Institute of America (ERIA) to evaluate the tool. More problematic still, NWEA (the Northwest Evaluation Association), which publishes MAP, used its own in-house researchers to conduct validity research. In both cases, the resulting research has not been published in peer-reviewed journals. Completely impartial, peer-reviewed research is obviously preferable. In the absence of such research, we report Curriculum Associates' and NWEA's findings….
Unfortunately, none of the current research on i-Ready or MAP provides any information on content validity or the validity of these tests at the sub-item or standards level. We cannot assess if these tests are in fact useful assessments for the purpose of increasing student achievement….
The best information currently available about whether either of these formative assessments affects student learning is from a randomized control trial (RCT) that found no effect of MAP (both the test and additional teaching resources provided by the publishers of MAP) on reading achievement for 4th and 5th graders in Illinois (Cordray et al., 2013). The two-year study included 32 elementary schools in five school districts in Illinois. Half of the schools were randomly assigned to implement MAP in 4th grade and the other half were randomly assigned to implement MAP in 5th grade. The study investigated whether the MAP program affected reading achievement after the second year of implementation. The results show that, overall, the MAP program did not have a statistically significant impact on students' achievement in either grade.
The lack of a research base on i-Ready and MAP as means for improving student learning is both surprising and disappointing given their widespread use as well as their cost. To be clear, the negative findings of a single study should not be taken as conclusive. Rather, they illustrate just how important it is for states and districts to understand precisely what research suggests about these two tests, and where we have important, unanswered questions that deserve peer-reviewed, external research studies commensurate with the widespread use of these assessments.
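To make the trial design concrete: the Cordray et al. comparison above is a cluster-randomized design, in which whole schools rather than individual students are randomized. Here is a minimal sketch, with invented numbers (not the study's data), of the assignment step and the difference-in-means intuition; a real analysis would use hierarchical models to account for students nested within schools.

```python
import random
import statistics

random.seed(42)

# 32 elementary schools, randomly split: half implement MAP in 4th grade
# (the treated arm for the 4th-grade comparison), half in 5th grade.
schools = [f"school_{i}" for i in range(32)]
random.shuffle(schools)
map_in_4th, map_in_5th = schools[:16], schools[16:]

# Hypothetical school-level mean reading scores after year two.
# A null result, as reported, looks like two overlapping distributions.
treated = [random.gauss(200, 10) for _ in map_in_4th]
control = [random.gauss(200, 10) for _ in map_in_5th]

effect = statistics.mean(treated) - statistics.mean(control)
print(f"estimated treatment effect: {effect:+.2f} scale-score points")
```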
The i-Ready products are designed to boost test scores. More than anything, they gather data about each student and use this information to refine products. The students contribute data for marketing research.
In my opinion, i-Ready products are tissue-thin on substance. They are tied to the Common Core and sharply focused on formulaic aspects of math and ELA: the gravy train for commercial products.
On the matter of research supporting sales of products, consider this.
In 2015, members of the Education Industry Association (EIA) could pay fees to Johns Hopkins University to secure "research certifications" for their products or services; the fees were discounted for EIA members. The services were heavily marketed by the then dean of the School of Education, Dr. David W. Andrews, who has since become president of National University (effective April 1, 2016).
Dean Andrews and faculty in the School of Education pitched their services in several sessions of the July 2015 conference of the EIA. The 2016 conference programs for the Education Industry Association featured faculty from Johns Hopkins marketing their “third party” evaluation services. http://www.educationindustry.org/edventures
In April 2016, the Education Industry Association was dissolved. Members were urged to join the Education Technology Industry Network (ETIN) of the Software & Information Industry Association http://www.siia.net/press/education-technology-industry-network-expands-welcomes-education-industry-association-members
Although the Education Industry Association is dead, its last hurrah included the sales pitch below, which invokes the reputation of Johns Hopkins University in the old-fashioned manner of seeking a Good Housekeeping seal of approval for a product or service.
Dear EIA Members and Potential Members:
"Strong entrepreneurial education companies are constantly seeking new ways to market and promote their products and services. Proving the efficacy of your product or service is the single best way to attract new customers, making the "procurement process" much simpler."
“EIA is now offering an amazing opportunity for its current members and for those wishing to join the Association. Beginning immediately, for a very small investment, EIA members can utilize the services of the Johns Hopkins School of Education (JHU).
The team at JHU is offering program design reviews at an extremely discounted rate exclusively for EIA members. There are multiple levels of review your company can participate in, based on your budget and desired review level.”
“I know this might seem a bit intimidating, but, trust me; it is well worth your time and investment. Can you imagine walking into a Superintendent’s office armed with a positive outcome report by none other than the Johns Hopkins School of Education?! Do you think your competitors will have this feather in their cap? The answer is a resounding NO!”
“Picture your new marketing campaign that features your positive outcome with the Johns Hopkins School of Education! And most importantly, imagine what you will learn about your own product or service and the best ways to continually improve in order to produce the best educational outcomes for your students. You actually owe it to yourself, to your investors, and to your students to participate in this incredible opportunity to bring further legitimacy to your company.”
“As you work with the team at JHU, you will choose one of five levels of review: an Instructional Design Review, a Short-Cycle Evaluation Study, a Case Study, an Efficacy Study, or an Effectiveness Study. Choose the level you’re comfortable with; for even a small investment of a few thousand dollars, you can have the Johns Hopkins seal of approval attached to your company.”
“Instructional Design Review: This is the perfect package for many EIA companies. After successfully completing the review process, your company will be issued a Johns Hopkins University Certificate for Completion of a Successful Design Review.
Again, imagine having that ammunition during your next district meeting! Using rubric assessments aligned with instructional design standards and best practices, your products and programs will be reviewed in domains that include the logic of your model, its theoretical framework, your use of evidence-based strategies, customer analyses, instructional objectives, pedagogy, and delivery/user support. $3,500-$5,000"
"THE FOLLOWING OPTIONS ARE ALSO AVAILABLE FOR LARGER, MORE ESTABLISHED EIA COMPANIES.
Short-Cycle Evaluation Study: These are quick-turnaround "pilots" of products (typically ed-tech based), which use observations, surveys, and interviews with teachers and students in a 10 to 15 week period to determine the potential effectiveness of a product for broader adoption in a school district or group of schools. … $10,000-$13,000"
"Case Study: These are small mixed-methods descriptive studies, which are more intensive and rigorous than short-cycle studies. … $15,000-$20,000"
"Efficacy Study: This is a medium-scale study that focuses on how programs and educational offerings operate and affect educational outcomes… $20,000-$35,000"
“Effectiveness Study: This is a larger-scale “summative evaluation” study that focuses on the success of the program in improving outcomes…. $38,000-up.”
“Again, the first offering – the Instructional Design Review – is the perfect fit for many EIA companies. To get started you only need to do two things: be an EIA member at any level of membership (and if you’re not a member, NOW is the time to join) and then contact me directly to put you in touch with the Johns Hopkins School of Education.”
You can see the legacy pitch, more description of the tiers of research and pricing for EIA members (as of 2015) here. http://www.educationindustry.org/assets/documents/jhu announcement.pdf
The Center for Research and Reform in Education (CRRE) at Johns Hopkins still offers "research" services, but the current pricing is not published and the examples of their contracts are fairly conventional.
CRRE is receiving $695,000 over 5 years for an evaluation of the Baltimore County Public School tech-friendly “Students and Teachers Accessing Tomorrow” program. http://education.jhu.edu/research/crre/Evaluation%20Services/
There is a lot of snake oil out there.
Yes, Laura, and a whole lot of that snake oil comes from poisonous snakes. 😦
I agree those prices sound like a slick sales job.
I am more familiar with MAP-r than i-Ready math. I find the adaptive model for MAP-r very interesting. Used as a "formative," it probably does not do a lot. Used as a predictive tool, it also probably does only a little. As a summative replacement for end-of-year testing, I don't think we should dismiss it entirely; it may well take less time to administer. I do agree that trying to use this four times a year, or in a continuous fashion, is silly.
“Unfortunately, none of the current research on i-Ready or MAP provides any information on content validity or the validity of these tests at the sub-item or standards level. We cannot assess if these tests are in fact useful assessments for the purpose of increasing student achievement….”
That psychometric "validity" is nothing more than whether each question functions as a good discriminator, so that the answers distribute themselves along a normal curve, and then whether the totals also line up in normal-curve fashion. It is an "internal" validity that does not address the underlying fundamental invalidity of the whole process, due to all the onto-epistemological errors, falsehoods, and other psychometric fudgings that occur during the making of standards and the accompanying tests.
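For the non-psychometricians: the "discriminator" check described above is typically a point-biserial correlation between success on a single item and the score on the rest of the test. A minimal sketch with invented answer data; this is roughly all that this kind of "internal" validity establishes.

```python
import math
import statistics

def point_biserial(item, rest):
    """Correlation between a right/wrong (1/0) item and the rest-of-test score."""
    right = [r for i, r in zip(item, rest) if i == 1]
    wrong = [r for i, r in zip(item, rest) if i == 0]
    p = len(right) / len(item)      # proportion answering the item correctly
    s = statistics.pstdev(rest)     # spread of the remaining scores
    return (statistics.mean(right) - statistics.mean(wrong)) / s * math.sqrt(p * (1 - p))

# Invented data: item responses (1 = correct) and rest-of-test scores.
item = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
rest = [48, 45, 44, 30, 28, 41, 25, 27, 43, 31]
print(f"discrimination = {point_biserial(item, rest):.2f}")
# A high value just means the item ranks students the way the rest of
# the test already does; it says nothing about what the test measures.
```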
Noel Wilson proved that COMPLETE INVALIDITY in his never-refuted, never-rebutted 1997 dissertation (you'd think that in twenty years the testing industry would have come up with some sort of legitimate objection to his work, that is, were there any to be had, and there aren't),
"Educational Standards and the Problem of Error," found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine.
A description of a quality can only be partially quantified; quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category by only a part of the whole. The assessment is, by definition, lacking, in the sense that "assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error" (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions, and in attempting to quantify educational standards and standardized testing, the descriptive information about those interactions is inadequate, insufficient, and inferior to the point of invalidity and unacceptability.
A major epistemological mistake is that we attach, with great importance, the "score" not only onto the student but also, by extension, onto the teacher, school, and district. Any description of a testing event is only a description of an interaction: that of the student and the testing device at a given time and place. The only logically correct thing we can attempt to do is describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be "assigned/attached" to the student, as it is a description not of the student but of the interaction. This error is probably one of the most egregious that occurs with standardized testing (and even with the "grading" of students by a teacher).
Wilson identifies four "frames of reference," each with distinct assumptions (epistemological bases) about the assessment process, from which the "assessor" views the interactions of the teaching and learning process: the Judge (think of a college professor who "knows" the students' capabilities and grades them accordingly); the General Frame (think of standardized testing that claims a "scientific" basis); the Specific Frame (think of learning by objective, as in computer-based learning where a correct answer is required before moving to the next screen); and the Responsive Frame (think of an apprenticeship in a trade or a medical residency, where the learner interacts with the "teacher" with constant feedback). Each frame has its own sources of error, and more error enters the process when the assessor confuses and conflates the frames.
Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words all the logical errors involved in the process render any conclusions invalid.
The test makers/psychometricians, through all sorts of mathematical machinations, attempt to "prove" that these tests (based on standards) are valid, that is, errorless, or at least minimally erroneous [they aren't]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations, the tests, and the results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of "error," any one of which renders the making, giving, and disseminating of test results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of "fudging" by the psychometricians/test makers can alleviate that invalidity.
Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is "vain and illusory." In other words, start with an invalidity and end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be "true"), or, to put it in more mundane terms, crap in, crap out.
And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure "'something' and we can specify some of the 'errors' in that 'something' but still don't know [precisely] what the 'something' is." The whole process harms many students, as the social rewards for some are not available to others who "don't make the grade (sic)." Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words, students "internalize" what those "marks" (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the "authorities" say, they accept as "natural and normal" that "story/description" of them. Although paradoxical in a sense, "I'm an 'A' student" is almost as harmful as "I'm an 'F' student" in hindering students from becoming independent, critical, and free thinkers. And having independent, critical, and free thinkers is a threat to the current socio-economic structure of society.
A lot of words to say the tests haven't been shown to be well aligned, either to the standards themselves or to our goals overall.
Dave,
Neither Wilson nor I said anything about the tests not being aligned with the standards, but we do speak to the overall goals of education (and for me that means public education in the USA).
The concept of standards is, again, fundamentally and conceptually (onto-epistemologically) bankrupt in regard to the teaching and learning process.
If you would like to understand more, feel free to contact me at dswacker@centurytel.net and I will email you a draft copy of the chapters on the purpose of public education and on standards and measurement. It's a tad too much to post here but not too much to read, only about 6-7 pages per chapter. I will be glad to send them to you.
This is a useful finding, sort of. I wonder if it is better to call the tests semi-summative (despite their marketing) and to ask not whether they improve a particular outcome but whether they provide teachers with actionable interim information that other products do not. The answer could be yes.
It’s probably no.
STANDARDIZATION is what the yahoos want for other people’s kids. This way the yahoos can dominate us and collect DATA to market. SAD and SICK.
Well, MAP-r is not entirely standardized in the normal sense. It is adaptive. So in that sense the validity of its standardization is in question, but that might be a good thing.
Also, I have to say that outside researchers have studied MAP-r (maybe not i-Ready). See http://ies.ed.gov/ncee/edlabs/regions/midwest/pdf/REL_20134000.pdf
The overmarketing may be a serious concern in terms of the data collection, I agree. NWEA's motives for collecting student data are surely profit-driven, but I feel they have a better privacy policy than most companies.
All online assessments are standardized. The questions are written by someone outside the classroom and are usually multiple choice. Those that are not multiple choice are scored by a computer or by a low-wage employee who is paid to skim, hired off Craigslist for $11 an hour.
Dave,
If I may ask, are you a public school teacher? If so, for how long, what subject/grade, and how did you obtain your credentials? If not a teacher or adminimal, what is your profession/job? I ask only because it helps me understand "where you are coming from."
Again, feel free to contact me if you would like to read more on the malpractices that are standards and standardized testing.
Schools are also using i-Ready programs for remediation. Teachers and some school unions don't seem to have any problem with it. I always laugh when teachers are accepting of more and more technology in their classrooms, or when I see pictures of classrooms full of students on their Chromebooks. Everyone oohs and ahhs, but they are giving away their jobs to tech. Don't think it can happen to you? Yes, it can.
Sadly, in many cases teachers aren’t given any say in the matter. You do what is mandated or you don’t have a job.
Microsoft, Google, and the rest of them have powerful, extremely well-funded advertising campaigns. Few people, even highly educated people, are immune to them. Most importantly, while much of the public is waking up to the lack of privacy on the internet, for some reason only activists seem to care.
Good point, left coast teacher.
I have to use i-Ready and I have serious problems with it. My students quickly realize that choosing wrong answers will get them easy lessons they can finish fast. I lost a battle with the parents of a seventh grader who called in my administrator to demand I give their son credit for pre-reading lessons. He reads slightly below grade level, but they insisted that if the computer told him to click on the picture of the red balloon as a reading lesson, it must be the right thing for him. I was told by administration that I was in the right but could not give him a zero.
Another child admitted he left the computer on while he went shopping so the program would count him as having struggled for hours with a single question.
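The gaming these two anecdotes describe is exactly what the simplest adaptive-difficulty rule rewards. A hypothetical sketch (not i-Ready's actual algorithm, whose details are proprietary): if the program simply moves difficulty down after every wrong answer, answering wrong on purpose drives the lessons to the easiest level.

```python
def next_level(level: int, correct: bool) -> int:
    """Naive adaptive rule: step difficulty up on a right answer, down on a wrong one."""
    return level + 1 if correct else max(1, level - 1)

level = 5                                    # start at nominal grade level
for _ in range(4):                           # student tanks four items on purpose
    level = next_level(level, correct=False)
print(f"difficulty after gaming: {level}")   # -> 1, the easiest lessons
```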
Yes on the giveaway of jobs; that is a good appraisal. I do think that there will always be a need for RTI testing that identifies areas of need. You can't eliminate it entirely. The questions are: what should it look like, how often should it happen, what is it used for, and how does it fit with the overall standards and testing done in the state? I don't have any reason to think i-Ready is better or worse than Pearson's RTI tests or FastBridge or anything else.
But no one should think that i-Ready is the intervention. I hope that's not happening. But maybe it is in some places. We have a major allocation-of-resources problem in RTI anyway, so the fact that computer testing is being looked at as an answer is just a symptom; it is not the disease itself.
We have no idea how to run RTI on the scale that is being asked; that is the disease.
"RTI testing"? Help me, Dave, I'm self-diagnosed with AIIDS*!
*Acronym Identification Impaired Disorder Syndrome (to be introduced in the DSM-VI)
It occurs to me that the new NYC teacher-evaluation agreement is the antithesis of "computerized" education. Teachers will be evaluated on what their students are learning, yes. But the "metric" will not be a standardized test score or an i-Ready score. It will be the actual work the children in that teacher's class have been doing, as well as observations of the teacher's craft.
Does anyone know the pros and cons of SumDog? It’s an online math website our school assigns as homework.
Does your child have a login password? If so, your child's name and possibly other personal information is probably accessible to third parties, including advertising agencies, potential future employers, and the United States military. Not to scare you, but that is an oftentimes overlooked downside of technology.
Regarding the quality of the online "instruction" SumDog provides: many websites that do not require logging in are useful as supplements to instruction. (They are never acceptable as replacements for human teaching during class.) I recommend asking your child what she or he learned in class that day, and then checking that the homework is practice with that particular lesson, not randomly assigned computer work. It's not just the quality of the website that matters; it's how the teacher uses it.
Thank you, LeftCoast Teacher. Our school has a subscription to this service and has really been pushing it this year as part of the homework. Yes, there is a login associated with each student. I don't know how to handle the PI collection. My kids enjoy doing math online, but there are things that irk me. The program wants to appeal to the pre-social-media set by incorporating "fun" activities such as friending other students, giving them virtual gifts, and dressing up their personas with different outfits/hairdos/settings. I have to keep reminding my kids to do math and not the other crap.
There's a whole lot, on and under the surface, not to like about that. Sounds like the company is compiling personality data to make individual profiles, gender-stereotypical ones at that. (I wonder if tech firms using online application processes will ever start hiring women in the same numbers as men.) Again, I don't mean to scare you or to suggest you quietly write friendly, private letters of concern or protest to the teacher. It's just imperative that parents and teachers become more informed about and protective of children's data profiles.
OK, thank you, LeftCoast Teacher, for raising issues I hadn't considered. So grateful to Diane Ravitch for this forum.
I think the parents are going to have to be the leaders in this battle. Parents should attend district board meetings in groups and get a spot on the agenda. They should express their concerns about too much screen time and privacy. They should also discuss the positive value derived from collaboration and human interaction. They should have some brief research to give to board members. If the district refuses to budge, they should insist that they have the right to opt out and that they want the teacher to assess with classroom tests and work. This type of embedded technology and testing is an insidious "nose of the camel in the tent."
An online curriculum is not a curriculum. Another reformster phrase for your dictionary, Diane. When you plop your kids down in front of a screen, learning is not taking place. The human brain did not evolve to take cues from pixelated, flashing lights. We were not meant to communicate with a server. A real curriculum involves listening and speaking, pencils and books.
I-Ready?
That’s the first clue, right there, the horrible grammar. “I ready” is only acceptable usage among toddlers first figuring out how to speak.
This post has served to convince me to file for our board of ed election this April. Kind of a last-straw thing. Will keep all informed.
Go get ’em, Duane. Mucho gusto.
Do it, Duane!
I don't know. Adaptive testing online may have some use for rapid assessment of a student's grade-level equivalent at the beginning of the year, so you can determine an individualized plan for each student. Continuous testing is scary, but I do not share your concern that it is an inevitable outcome of experimenting with online products. Tech has its place. It will not fix everything everywhere, but it has its place.
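For what it's worth, the rapid placement use described above is essentially a binary search over difficulty. A minimal sketch with a made-up student model; real adaptive tests like MAP use item-response-theory models, not this toy rule.

```python
def simulated_student(true_level: float):
    """Toy student: answers correctly whenever the item is at or below their level."""
    return lambda item_level: item_level <= true_level

def adaptive_placement(answer, lo=1.0, hi=12.0, items=6):
    """Narrow the [lo, hi] grade-level bracket with `items` adaptive questions."""
    for _ in range(items):
        probe = (lo + hi) / 2        # ask an item at the midpoint difficulty
        if answer(probe):
            lo = probe               # correct: student is at or above this level
        else:
            hi = probe               # wrong: student is below this level
    return (lo + hi) / 2

student = simulated_student(true_level=4.6)
print(f"estimated grade level: {adaptive_placement(student):.1f}")
# Six items bracket the toy student's 4.6 to about a tenth of a grade,
# which is the efficiency argument for adaptive placement.
```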
Dave, I agree, tech has its place. However, it is being misused and abused by corporations seeking more tech and more profits. Teachers are not controlling the use of tech; others are. Having children assessed continuously, every time they open their iPad or notebook, is intrusive and changes the nature of instruction. It is a misuse of tech. It also facilitates data mining, which is abusive.
I know more about how my students are doing by WATCHING and INTERACTING with them than any computer program could tell me.
Exactly!
Anyone who thinks that a computer program can interpret what a student knows, feels, desires, and fears is thinking ignorantly.
Kids are data, and data is For Sale.
As a health and PE teacher, I create my own online assessments, mainly through Google Forms and Slides. My students often complete these at home, but they are brought back to the classroom for sharing and discussion via laptop circles. For example, students will create a short slide presentation on a nutrition topic and then present it to their peer groups on its due date. I also use Google Forms to flip instruction, or as a formative means of student self-assessment through rubric scales just before a project is due. My point is, I don't adhere to a prescribed online curriculum. I create my own assessments and learning tools and then use them to drive collaborative student learning in the classroom.