Laura H. Chapman left the following comment. The word “desperate” to describe this quest for a scientific, data-based means of judging teachers is mine. Something about it smacks of anti-intellectualism, the kind of busywork exercise that an engineer would design, especially if he had never taught K-12. This is the sort of made-up activity that steals time from teaching and yields minimal rewards.
Chapman writes:
Please give at least equal attention to the 70% of teachers who have job assignments without VAMs (no state-wide tests). For this majority, USDE promotes Student Learning Objectives (SLOs) or Student Growth Objectives (SGOs), a version of 1950s management-by-objectives on steroids.
Teachers who have job-alike assignments fill in a template to describe an extended unit or course they will teach. A trained evaluator rates the SLO/SGO (e.g. “high quality” to “unacceptable” or “incomplete”).
The template requires the teacher to meet about 25 criteria, including a prediction of their students’ pre-test to post-test score gains on an approved district-wide test. Districts may specify a minimum threshold for these gains.
Teachers use the same template to enter the pre- and post-test scores. An algorithm determines whether the gain meets the district threshold for expectations, then stack-ranks teachers as average, above or below average, or exceeding expectations.
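In plain terms, the scoring step reduces to something like the minimal sketch below; the 100-point scale, the threshold, and the rating labels here are illustrative assumptions, since each district sets its own.

```python
# Minimal sketch of the SLO/SGO scoring step described above.
# The 100-point scale, the threshold, and the rating labels are
# illustrative assumptions, not any particular district's rules.

def slo_rating(pre_score, post_score, district_threshold):
    """Rate one teacher's class from its pre/post-test gain."""
    gain = post_score - pre_score
    if gain >= 2 * district_threshold:
        return "exceeding expectations"
    if gain >= district_threshold:
        return "meeting expectations"
    return "below expectations"

def stack_rank(teachers):
    """Order teachers from largest to smallest average gain."""
    return sorted(teachers, key=lambda t: t["gain"], reverse=True)

# Example: a district threshold of 10 points on a 100-point test.
print(slo_rating(pre_score=42, post_score=55, district_threshold=10))
# -> meeting expectations
```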
1. The Denver SLO/SGO template is used in many states; this example is for art teachers. Denver Public Schools. (2013). Welcome to student growth objectives: New rubrics with ratings. http://sgoinfo.dpsk12.org/
2. One of the first attempts to justify the use of SLOs/SGOs for RttT. Southwest Comprehensive Center at WestEd. (n.d.). Measuring student growth in non-tested grades and subjects: A primer. Phoenix, AZ: Author. http://nassauboces.org/cms/lib5/NY18000988/Centricity/Domain/156/NTS__PRIMER_FINAL.pdf
3. This USDE review shows that SLOs/SGOs have no solid research to support their use. Gill, B., Bruch, J., & Booker, K. (2013). Using alternative student growth measures for evaluating teacher performance: What the literature says (REL 2013–002). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Mid-Atlantic. http://ies.ed.gov/ncee/edlabs
4. The USDE marketing program on behalf of SLOs/SGOs. Reform Support Network. (2012, December). A quality control toolkit for student learning objectives. http://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/slo-toolkit.pdf
5. The USDE marketing campaign for RttT teacher evaluation and the need for district “communication SWAT teams” (p. 9). Reform Support Network. (2012, December). Engaging educators: Toward a new grammar and framework for educator engagement. Author. http://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/engaging-educators.pdf
6. Current uses of SLOs/SGOs by state. Lacireno-Paquet, N., Morgan, C., & Mello, D. (2014). How states use student learning objectives in teacher evaluation systems: A review of state websites. Washington, DC: U.S. Department of Education, Institute of Education Sciences. http://ies.ed.gov/ncee/edlabs/regions/northeast/pdf/REL_2014013.pdf
7. Flaws in the concepts of “grade-level expectation” and “a year’s worth of growth.” Ligon, G. D. (2009). The optimal reference guide: Performing on grade level and making a year’s growth: Muddled definitions and expectations, growth model series, Part III. Austin, TX: ESP Solutions Group. http://www.espsolutionsgroup.com/espweb/assets/files/ESP_Performing_on_Grade_Level_ORG.pdf
As a person who has actually done science, I can say that “scientific teacher evaluation” is an oxymoron of the first order. “Faith-based” might be the appropriate descriptor.
Science is about collecting evidence and even trying to poke holes in ideas to see if they stand up. Try that with VAM.
Well said, Peter.
Measuring people is like counting how many angels can fit on the head of a pin. Wonder if Gates will drop 50 mil on that study?
My school board rep can’t fathom why I, an engineer myself, don’t favor standardized tests, VAM, etc. I tell her it’s because, as an engineer, I understand that any measurement is only as good as the quality and relevance of your data and the quality and precision of the instruments you have to measure it with. Measuring teachers using VAM is like trying to score a car’s overall performance using only a stopwatch and a tape measure. With those as your only tools, you’re just as likely to determine that a Hyundai is better than a Corvette as the opposite. It’s beyond stupid.
Well said, Jack!
Thanks, Jack! And not only is it beyond stupid… it actually prevents teachers from teaching. Next year I will likely have to establish a particular baseline, which wastes a lesson, and will have to single out a target group of kids whom I must TRACK for the year and prove have learned, with my evaluation depending on it. Given that I am a specialist, work Title I, and will constantly have to interrupt the flow of classes to accommodate an ever-increasing number of tests… the whole thing reeks of control while denigrating actual learning! Classroom teachers have been doing the SLOs this year, are completely bewildered as to WHY, and are having to waste vast amounts of learning time on this. Love the tape measure and stopwatch analogy. Might as well measure learning by true student growth (as in their height at the start of the year, mid-year, and at the end of the year).
Oh… meant to include that this proof of learning must be through “hard data”: creation of a test to be administered at specified intervals. Students in Title I schools come and go! Students have scattered attendance in Title I schools. Students are constantly pulled out for ESL or testing, etc., at Title I schools… Class time is always interrupted. This, coupled with a forced style of pseudo-assessment tied to “teacher evaluations,” is disastrous to student learning.
SLOs came out of the Wisconsin “Educator Effectiveness Design Group” as a clear alternative to standardized testing, with an emphasis on presentations, portfolios, and classwork (but with “district tests” included). In the 3 years since the design group, the guidance from the Department of Public Instruction has increasingly emphasized the test-based options. Next year is the first year the system will be required. I am a school board member in Madison, WI, trying to push back and, at least with this portion of the system, make sure we use real things, like presentations, portfolios, and classwork. As you might imagine, there are many forces working to make this difficult. One of the most insidious is the reality that writing, approving, and monitoring test-based SLOs is much easier and less time-consuming. That is seductive for many involved, including the teachers.
It’s a work in progress, but I suggest you take a close look at the most current version of the Still Developing/In Discussion guidance document.
Click to access StillDeveloping.pdf
SLOs came from Denver around 2004 and were part of a scheme to get pay-for-performance installed. That Denver project was privately funded and included a “study” conducted by the Community Training and Assistance Center in Boston.
The SLO process was later piloted in North Carolina, with a report from the same group. See Community Training and Assistance Center. (2013). It’s more than money: Teacher Incentive Fund—Leadership for educators’ advanced performance, Charlotte-Mecklenburg Schools. Boston: Author.
The portfolio process is being tried out in Tennessee, in the arts. Teachers upload digital examples of student work. These are rated by two evaluators, with a third called in if there is not consensus. There is still the architecture of a “pre-test and post-test” in this process, and ranking of teachers in relation to gains achieved, routinely mislabeled “growth.”
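For readers who want the mechanics spelled out, the two-rater rule amounts to something like this minimal sketch; the 1–4 rubric scale and the “within one point” definition of consensus are illustrative assumptions, not Tennessee’s actual specifications.

```python
# Minimal sketch of the two-rater portfolio scoring described above.
# The 1-4 rubric scale and the "within one point" consensus rule are
# illustrative assumptions, not Tennessee's actual specifications.

def portfolio_score(rater_a, rater_b, third_rating=None):
    """Average two rubric scores; fall back to a third rater's score."""
    if abs(rater_a - rater_b) <= 1:   # raters agree closely: consensus
        return (rater_a + rater_b) / 2
    if third_rating is None:
        raise ValueError("no consensus: a third rater must be called in")
    return third_rating               # the adjudicator's rating stands

print(portfolio_score(3, 4))      # -> 3.5 (consensus)
print(portfolio_score(1, 4, 2))   # -> 2   (adjudicated)
```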
It’s hard for me to imagine any valid means of evaluating teachers other than having a few master teachers observe another teacher multiple times. But few administrators are master teachers themselves, and the rubrics they use do not substitute for good judgement. Let administrators prosecute the grossly incompetent and give up on the benighted quest to precisely evaluate those they are most likely unqualified to judge. If we spent half as much energy building excellent curricula (read: meaty, content-rich, exquisitely crafted, multiply-drafted lessons and units) as we do fretting about grading teachers, we’d have huge gains in student achievement. Currently the curricula are often shabby – SO much work needs to go into building a great curriculum. So much of ed reform is a shameful distraction from this great project. We fiddle while Rome burns.
And then there is New York –
Politics and predetermined decisions aside – the Regents Task Force for APPR spent months reviewing and making recommendations for the evaluation system. Some recommendations were accepted – some not – but the final product offered options for local districts as long as they were negotiated between the district and union. Nothing would change the use of the single test score but the other options were just that – options:
60% based on observations and review of artifacts by trained lead evaluators
20% based on standardized test scores (uggh) for teachers in those subjects and a “growth measure” based on a locally developed/selected test for teachers in non-tested subjects (ex. music, kdg., electives, etc.)
20% based on “local assessments” (not growth) for all teachers (ex. a district essay, district-developed math benchmark assessments, or off-the-shelf assessments).
Like it or not – again – at least there were options that could keep the stakes lower (the sketch below shows how those weights combine).
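A minimal sketch of that 60/20/20 composite, assuming each component is reported on a 0–100 scale; real APPR plans used negotiated point bands, so these numbers are illustrative only.

```python
# Minimal sketch of the 60/20/20 APPR composite described above,
# assuming each component is scored 0-100. Actual plans used
# negotiated point bands, so treat these numbers as illustrative.

def appr_composite(observation, state_growth, local_assessment):
    """Weight the three components at 60% / 20% / 20%."""
    return 0.60 * observation + 0.20 * state_growth + 0.20 * local_assessment

# Example: strong observations cannot fully offset weak test components.
print(appr_composite(observation=90, state_growth=50, local_assessment=55))
# -> 75.0
```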
NO mention of the dreaded SLOs in any of this. (Full report below – SLOs don’t appear.)
Click to access RegentsTaskforceonTeacherandPrincipalEffectiveness.pdf
Then out of the blue come SLOs in the regulations.
So what’s the difference between the growth measures in the law and report and the SLOs?
Because of the technical specificity and the growth component, some districts developed dozens upon dozens of pre-tests in every subject area. Predictably, kids bombed the pre-tests in class after class (not the best way to start the school year), kids were told the tests were part of the teacher evaluation, pressure on kids and teachers ensued, curriculum narrowed to focus on the objectives, and suddenly every assessment was viewed as “standardized testing.”
Tail wagging the dog.
So – bad enough that the state tests on their own were part of the evaluation – the SLOs took on the same high-stakes nature.
The pre-tests scared some students right out of the class. Once the kids took the physics pre-test (with questions from the Regents exam), they decided that the course was just too hard, so they dropped it.
I think of SLOs like using the Magic 8 Ball. Oh wait – answer fuzzy, come back later.