Teachers in Middlesex County were surveyed about their views of PARCC testing. New Jersey is one of the few states that continues to participate in PARCC. Originally, there was a consortium with 24 states. Now there are five.
Here are the discouraging, but not surprising, findings:
Middlesex County Education Association Releases Findings of First-Time Survey of Teachers After PARCC Testing: Finds Significant Problems
June 2016
Some 1285 Middlesex County teachers and school professionals voluntarily and anonymously participated in a survey regarding the impact of the PARCC standardized test on students, schools, and instruction. In the most comprehensive survey conducted in New Jersey to date, the results revealed serious problems with the new testing regimen. The major findings of the survey include:
• Feedback from the test was significantly delayed or not distributed to teachers.
• Conditions under which the PARCC test was taken call into question the validity of the results.
• PARCC and related test preparation have negatively impacted many students and raised concerns for many parents.
• The new test is a significant drain of instruction time and a disruption to classes.
• As a result of the PARCC test, students have limited access to library media centers and computers as well as special services and programs.
• The testing/evaluation environment has had a negative impact on teachers and staff.
Delay in Receiving Feedback
In spite of the NJ Department of Education’s promises that PARCC testing would give teachers rich feedback to improve instruction, 34% of teachers of tested areas (English and mathematics) did not receive their students’ spring 2015 test scores until January 2016 or later. Another 24% never received them. After the state spent $22.1 million and local districts spent millions more to implement the PARCC, less than 2% of these teachers found the data collected from PARCC to be an improvement over past state standardized tests.
Validity of PARCC Test Questionable and Students Impacted Negatively
Nearly half of educators reported directly observing administration problems, technical or otherwise, which could have negatively impacted student test scores. In addition, 59% reported observing students refusing to take the test seriously, resulting in invalid scores. In spite of these issues, the state of New Jersey continues to use PARCC test scores as part of evaluations for teachers of tested areas grades 4 to 8 and these scores are projected to play a larger role for more teachers in the future.
PARCC testing has had a pronounced effect on many students: 57% of school teachers and support staff reported increased anxiety and depression among students related to testing, and 42% reported increased negativity and loss of interest in school by students overburdened by testing. Only 14% reported no observed problems for their students. One teacher wrote, “My first graders are worried about future testing.” Another teacher noted, “I recently looked at old yearbooks – ten years ago our kids did fantastic learning projects. Now, all we do is data-driven instruction and testing.”
A notably high percentage of educators, 60%, reported parents expressing complaints, apprehension, or concerns about the PARCC test directly to them. This echoes concerns parents have already made public: tens of thousands of New Jersey children opted out of the PARCC test last year at their parents’ request.
Impact of PARCC Testing on Instruction
In terms of instructional time lost to PARCC, 34% of teachers of tested areas (English and mathematics) reported spending 11-20 hours on PARCC testing this year, and 35% spent more than 20 hours on PARCC testing. In addition, 38% of these teachers estimated spending another 6 or more instructional days on test prep, and nearly half of these teachers lost additional time to pre-testing to identify weak students before the PARCC test. Over 36% reported that their schools purchased commercially prepared pre-tests for use prior to the PARCC.
Educators widely reported that PARCC testing resulted in the closing of library media centers and loss of access to computers for extended periods of time, disruption of class schedules and routines, and loss of time for special services such as speech therapy and counseling. Advanced Placement teachers complained of the loss of valuable instructional time just prior to the AP tests for college credits. Other teachers commented that the guidance department, child study teams for special education students, and much of the school administration were barely available for up to a month for testing. A special education teacher reported that many special education students had substitute teachers for several weeks while their special education teachers administered the PARCC to other students in other grades.
Impact on Teachers and Professional Staff
As a result of the new evaluations for students and teachers, 91% of those surveyed reported an increased workload, primarily in the form of increased paperwork and documentation of work done rather than increased time working with students. Nearly 62% reported that PARCC testing has had a definite negative or strongly negative effect on themselves and their colleagues.
While 83% reported that 5 years ago they were either satisfied or strongly satisfied with their jobs, only 34% said they were satisfied or strongly satisfied with their jobs today. Representative of many of the comments in the survey, one teacher stated, “I have always loved my job, but the last few years with the implementation of the state testing and new teacher evaluation system, I am seriously considering retiring early and dissuading my own children from seeking this profession.”
Another teacher commented, “I wish I could just TEACH and do the things in my classroom that I know will lead to real learning based on the needs of my students and not on some politician’s ever-changing agenda. I am truly saddened by what is happening; the students are not being served. This is the first group of students I have had that have been working with Common Core standards their entire school careers, and I must say that they are the least prepared and have the biggest skill gaps of any group I have had in decades. Rushing through developmentally inappropriate material in order to score well on a test that supposedly measures “deep” knowledge and application (on a timed test, yet!) does not do justice to our students or our profession.”
About the Survey and for More Information
The Middlesex County Education Association Research and Advocacy Committee collected data from 1287 Middlesex County educators who voluntarily responded anonymously to an online survey between May 9 and June 12, 2016. Respondents work in all major districts in the county including East Brunswick, Edison Township, Highland Park, Middlesex, Milltown, New Brunswick, North Brunswick, Perth Amboy, Piscataway, Sayreville, South Amboy, South Brunswick, Spotswood, and Woodbridge Township. Elementary school teachers and professional staff represented over 38% of the respondents; middle school respondents composed 32%; and high school participants were 29% of the total. Just under 9% have been in the profession for less than 5 years, 20% for 5 to 10 years, 43% for 11 to 20 years, and 28% for more than 20 years. This breakdown is generally representative of the profession as a whole.
Questions regarding this survey can be directed to Ellen Whitt, Middlesex County Education Association Research and Advocacy Committee, whitt.ellen@gmail.com or 732-771-7882.
Why am I not surprised?
I am not surprised either because of the following:
“Voluntary response bias. Voluntary response bias occurs when sample members are self-selected volunteers, as in voluntary samples. An example would be call-in radio shows that solicit audience participation in surveys on controversial topics (abortion, affirmative action, gun control, etc.). The resulting sample tends to overrepresent individuals who have strong opinions.”
What if the so-called voluntary response bias turns out to be accurate?
If PARCC were so effective for student learning, would it not have attracted a positive response bias instead?
I agree: voluntary response bias could run positive or negative. One never knows whether, by chance, the biased result happens to be accurate. That is why it is called a bias, and it is why the results are not statistically valid.
Random sampling is required to obtain accurate estimates.
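The statistical point being debated here can be sketched with a small simulation. All numbers below are invented for illustration, and the response model (stronger opinions make a teacher more likely to volunteer) is an assumption, not anything drawn from the actual survey. The sketch shows what the textbook definition predicts: a voluntary sample over-represents strong opinions, while a random sample of the same size tracks the population.

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical population: each teacher's opinion is a score centered
# near 0, with negative values = negative opinions of the test and
# positive values = positive opinions.
population = [random.gauss(0, 1) for _ in range(100_000)]

# Assumed voluntary-response model: the stronger the opinion (larger
# absolute value), the more likely a teacher is to answer the survey.
volunteers = [x for x in population
              if random.random() < min(1.0, 0.1 + 0.4 * abs(x))]

# A simple random sample of the same size, for comparison.
random_sample = random.sample(population, len(volunteers))

# The voluntary sample inflates the average *strength* of opinion,
# even though positive and negative volunteers may cancel in the mean.
print(f"mean |opinion|, population:    {mean([abs(x) for x in population]):.3f}")
print(f"mean |opinion|, volunteers:    {mean([abs(x) for x in volunteers]):.3f}")
print(f"mean |opinion|, random sample: {mean([abs(x) for x in random_sample]):.3f}")
```

This is why the two commenters above can both be right: the volunteer pool reliably exaggerates how strongly people feel, but whether the exaggeration runs positive or negative depends on which strong opinions dominate the population, which the voluntary sample alone cannot tell you. Only the random sample gives an unbiased estimate either way.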
As if over-testing weren’t bad enough, NJ teachers are constantly demonized and swift-boated by our bully (not in the good sense) governator. He bashes the public schools, referring to them as failure factories, and he has described teachers as greedy, selfish, and uncaring about the kids.
YES. It is with language like this that we (the public) have truly lost control over education.
Ridiculous. PARCC testing is perfect, as everyone knows.
This test alone will make every child college and career ready and that alone will make the US a World Leader in education. That’s what parents were told repeatedly, so it must be true. None of us dopes had any idea how our children were doing in school without this ranking system.
I have no doubt that if the same type of survey were taken in all the states that participated in the PARCC, the results would be the same or very nearly so. I know in New Mexico the results would be the same. The problem is that the people who can do anything about this mess, legislators and governors, will turn a blind eye to the situation because it is not the politically correct thing to do and/or it would be too hard for them to figure out what to do and act.
The test makers and sellers call tests objective measures and accuse people who are critical of them of wanting to avoid accountability (or worse). The testing industry has created what should be called a hostile work environment and a hostile learning environment, aided and abetted by elected officials and lobbyists. The lawyers who could make this case are not interested.
I wonder why we are having these repetitive discussions.
Barbara Janey,
YEP! The invalidities that are educational standards and standardized testing have been known for years.
“Validity of PARCC Test Questionable and Students Impacted Negatively”
No, it’s not questionable. There is no validity whatsoever. COMPLETELY INVALID.
Noel Wilson proved as much almost 20 years ago in his never refuted nor rebutted dissertation, arguably THE most important educational writing of the last half century. To understand those invalidities read and comprehend
“Educational Standards and the Problem of Error” found at:
http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine.
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing, the descriptive information about said interactions is inadequate, insufficient, and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference,” each with distinct assumptions (epistemological bases) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think of a college professor who “knows” the students’ capabilities and grades them accordingly), the General Frame (think of standardized testing that claims to have a “scientific” basis), the Specific Frame (think of learning by objective, as in computer-based learning, where a correct answer is required before moving on to the next screen), and the Responsive Frame (think of an apprenticeship in a trade or a medical residency program, where the learner interacts with the “teacher” with constant feedback). Each category has its own sources of error, and more error is introduced into the process when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, i.e., errorless or at least with minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations, the tests, and the results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel that finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “’something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words, students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an ‘A’ student” is almost as harmful as the “I’m an ‘F’ student” in hindering students from becoming independent, critical, and free thinkers. And having independent, critical, and free thinkers is a threat to the current socio-economic structure of society.
Just too scary!
The “achievement gap” between the privileged and the “persistently disadvantaged” is worse than we thought. A bit off topic, but worth a read (NYT).
http://www.nytimes.com/2016/08/14/upshot/why-american-schools-are-even-more-unequal-than-we-thought.html?&hpw&rref=education&action=click&pgtype=Homepage&module=well-region&region=bottom-well&WT.nav=bottom-well&_r=0
This shines a light on the elephant in the reform room. No, not all students can learn at the pace required by annual testing. Yes, some children are so cognitively damaged by poor pre-natal care, pre-natal abuse (drugs, alcohol, tobacco), poor nutrition, lead poisoning, and childhood trauma that they are physiologically incapable of learning like those who have been properly cared for. One day we will look back on this era of testing and hang our heads in shame for ignoring brain science.
In MD we are completely captured by PARCC nonsense. I have witnessed two years of testing at the high school level. Opt out was basically outlawed here, so most of the higher-skilled students performed their own type of opt out by answering randomly or putting their heads down upon their desks after 10 minutes of testing. The test is of course invalid and anti-science, with no peer review or transparency, but the results are even further invalidated by student disinterest. And so now what is Maryland’s response to this demonstrated invalidity? We are using PARCC data to script our SLOs (more invalid nonsense) so that we can base our instruction on DATA. As if this data has any real meaning at all! It is fake, fabricated, and invalid, but all school improvement in this state MUST be based on DATA, and PARCC is sacred. No wonder our school system is short several certified math teachers alone.
We teachers in MD will now base our school year on this horrible, invalid, and anti-scientific premise. So sad.
In NYS, opting out is an option in grades 3-8, but for high school, where a student must pass five Regents Exams in order to graduate, our kids are stuck. The Board of Regents has been switching the former Regents Exams over to CC Regents, which it has to prorate so some students can actually pass the thing.
I’m not sure what the goal is, but actual learning seems to be at the bottom of the list.