Laura H. Chapman offered the following comments about Ohio’s shell game of assessment. Among other troublesome issues, Ohio will encourage “shared attribution” for evaluating teachers; that means that teachers who do not teach tested subjects will be assigned a rating based on the scores of students they do not teach.
In Ohio, the State Superintendent of Public Instruction, Dr. Ross, has a request in to the Governor and legislators to lighten the testing load. The “Testing Report and Recommendations” (January 15, 2015) includes some cockamamie statements about the purposes of tests, along with some revealing statistics.
Among the highlights are these: Ohio students in grades K-12 spend, on average, about 19.8 hours a year taking tests. They spend approximately 15 additional hours each year practicing for tests.
A chart on page 5 shows that kindergarten students are tested for 11.3 hours on average, and grade 1 students for 11.6 hours on average. These are the lowest times. Add the test prep for totals of 26.3 hours and 26.6 hours, respectively. That is slightly more than the time allocated for elementary school instruction in the visual arts in the era before test-driven policies determined everything about K-12 education.
The highest testing times are in grade 3 (28 hours) and grade 10 (28.4 hours), not counting test prep. The spike at grade 3 comes from Kasich’s guarantee: “read by grade three” or repeat the whole grade. Dr. Ross wants to cut about four hours of the current reading test time by letting grade 3 teachers administer those super-high-stakes tests at will, more than once if necessary, with a summer grade 3 test being decisive for students who have not passed muster earlier. This strikes me as a shell game: not really a reduction, but an increase for students who are still learning to read.
This report also recommends that testing time be reduced by cutting tests for SLOs. “Eliminate the use of student learning objective (SLO) tests as part of the teacher evaluation system for grades pre-K to 3 and for teachers teaching in non-core subject areas in grades 4-12. The core areas are English language arts, mathematics, science and social studies.”
“Teachers teaching in grades and subject areas in which student learning objectives are no longer permitted will demonstrate student growth through the expanded use of shared attribution, although at a reduced level overall. In cases where shared attribution isn’t possible, the department will provide guidance on alternative ways of measuring growth” (p. 10).
This obscure language about the expansion of “shared attribution” as a way to measure student learning is not clarified by the following statement (pp. 10-11).
“…when no Value-Added or approved vendor assessment data is available, the department gives teachers and administrators the following advice.
First, educators should not test solely to collect evidence for a student learning objective. The purpose of all tests, including tests administered for purposes of complying with teacher evaluation requirements, should be to measure what the educator is teaching and what students are learning.
Second, to the extent possible, eliminate the use of student learning objective pre-tests. When other, pre-existing data points are available, teachers and schools should use those instead of giving a pre-test.” (pp. 10-11).
The convoluted reasoning and ignorance about testing are amazing. “The purpose of all tests, including tests administered for purposes of complying with teacher evaluation requirements, should be to measure what the educator is teaching and what students are learning.” Student tests are not direct measures of what teachers are teaching. Many tests document what students have or have not learned beyond school. Compliance with legislative mandates apparently means you can ignore undisputed facts and sound reasoning about testing.
In the proposed policy, teachers who do not receive a VAM rating based on scores from PARCC tests (ELA and math), tests from AIR (science and social studies), or some other VAM-friendly standardized test from an “approved vendor” are asked to get used to the idea of “sharing” scores produced by students and teachers of subjects they do not teach, with statewide scores processed through the VAM calculations. There is no evidence these tests are instructionally sensitive, meaning suitable for teacher evaluation. The state-approved tests seriously misrepresent student achievement, especially those from PARCC, because those tests assume the CCSS have been fully implemented, with cumulative learning from prior years.
SLOs and the district-approved tests for them appear to be dead (or dying) in Ohio, not because they were seriously flawed concepts from the get-go, but because those tests took longer to administer on average than others. The “loud and clear” demands for less testing are most easily met by cutting the SLO tests (those usually designed through teacher collaboration) in favor of scores allocated to teachers under the banner of “shared attribution.”
As in many other states where governors and legislators are trying to micromanage teachers, there is an unconscionable insistence that any data point is as good as any other, that tests are “objective,” and that junk science marketed as VAM is not a problem.
Unfortunately, all of the talk about “high quality” this and that does not extend to expectations for fair, ample, and ethical portrayals of student and teacher achievement.