Give it up, reformers. The scores on the ACT are flat from 2010-2014, despite the billions wasted on testing, test-based teacher evaluation, and merit pay. Your reforms have reformed nothing. They have failed. Pay attention.
Improve the lives of children and families. Improve working conditions in the school. Demand equitable resources for schools. Reduce class sizes for needy children. Do what works. Throw your punishments and sanctions into the ash-heap of history. It will happen sooner or later.
Start now to build the structures that work for students and teachers.
FairTest
National Center for Fair & Open Testing
For further information:
Bob Schaeffer (239) 395-6773
cell (239) 699-0468
For use with annual ACT scores on or after Wednesday, August 20, 2014
STAGNANT ACT SCORES SHOW TEST-DRIVEN U.S. SCHOOL POLICIES
HAVE NOT IMPROVED COLLEGE READINESS,
EVEN WHEN MEASURED BY OTHER TESTS
Another year of flat scores on the ACT, the nation’s most widely administered college admissions exam, provides further evidence that a decade of test-driven public school policies has not improved educational quality.
Reacting to ACT scores released today, Bob Schaeffer, Public Education Director of the National Center for Fair & Open Testing (FairTest), said, “Proponents of ‘No Child Left Behind,’ ‘Race to the Top,’ ‘waivers,’ and similar state-level programs promised that focusing on testing would boost college readiness while narrowing score gaps between racial groups. The data show a total failure according to their own measures. Doubling down on unsuccessful policies with more high-stakes K-12 testing, as Common Core exam proponents propose, is an exercise in stubbornness, not meaningful school improvement.” (see http://fairtest.org/common-core-assessments-factsheet)
Stagnant scores and racial gaps have also been reported on the federal government’s National Assessment of Educational Progress (NAEP) and the SAT college admissions test.
Schaeffer continued, “The lack of progress toward excellence and equity will provide further ammunition for the country’s growing testing resistance and reform movement. Ending the counter-productive fixation on standardized exams is necessary to create the space for better assessments that actually enhance learning and teaching.” FairTest actively supported this past spring’s opt-out campaigns and other protests that focused attention on testing overuse and misuse.
FairTest is also a national leader for test-optional higher education admissions. More than 830 accredited, bachelor-degree granting colleges and universities now do not require all or many applicants to submit SAT or ACT scores (see http://fairtest.org/university/optional). Eight more schools – Wesleyan University, Old Dominion University, Hofstra University, Temple University, Montclair State University, Beloit College, Bryn Mawr College and Emmanuel College — dropped test-score requirements already this summer. In addition, Hampshire College, which long was test-optional, is now “test-blind.”
– 30 –
2014 COLLEGE-BOUND SENIORS AVERAGE ACT SCORES
1,845,787 test takers
                    COMPOSITE SCORE   FIVE-YEAR SCORE TREND (2010–2014)
ALL TEST-TAKERS          21.0                  0.0
African-American         17.0                + 0.1
American Indian          18.0                – 1.0
Asian                    23.5                + 0.1
Hispanic                 18.8                + 0.2
White                    22.3                  0.0
Source: ACT, The Condition of College & Career Readiness 2014
Yes, billions for tech devices and for standardized testing, and the test results don’t change. Keep banging this drum until parents get the message, which may make them more likely to opt out and scuttle the ridiculous PARCC barreling down on us. Mass media won’t profile this dreadful failure of the testing/tech juggernaut, so we’ll have to do it ourselves, network by network.
I am putting this here … but it may need to be elsewhere … here is what MY district posted. I am deleting names of schools and the superintendent … but it is verbatim, otherwise. I might add that we are now putting 30 kids in many of the elementary classrooms. The rooms were built to hold 24.
Superintendent’s Message
I am pleased to announce that the ____________School District has once again earned high ratings on the 2014 Ohio Department of Education District Report Card. Our district received all A and B grades on the major components of the report card. These ratings are a testament not only to the outstanding individuals who work with our children each day, but also to the parents and other community members who support our schools in many ways. The __________School District is blessed to have hard working students, committed staff members, exemplary administrators, and a supportive community. It takes everyone working together to achieve this high level of recognition. Let’s continue the trend in the 2014 – 2015 school year!
Our staff is working diligently to prepare for the new Ohio learning standards and assessments that will be put in place this school year. The new state assessments will be much more rigorous for our students than the past state tests and will be conducted online. Our teachers have written new courses of study for each subject area to reflect the new standards. We have also upgraded our internet connection and installed wireless access points in every classroom in the district to prepare for the online testing. Information from the Ohio Department of Education indicates that the passing percentages on the new state assessments are anticipated to be lower than those in the past. I assure you that the _______ educational team will do everything in their power to have our students prepared to do their very best on the assessments.
The completion of the additions and renovations at ________Elementary School this summer marks the end of the district’s 57 million dollar Facilities Improvement Plan. Begun with the passage of a bond issue in 2002, the plan has allowed the district to build a new high school and place additions on and totally renovate the other school buildings. The community should be proud of providing 21st Century learning spaces for their children.
The district will also be able to provide a 21st Century learning experience for our students through the award of a Straight A Fund Grant from the Ohio Department of Education in the amount of $949,987. This grant, titled Personalization through Digital Age Teaching and Learning, will provide every student in grades 5-12 a Chromebook laptop computer. Learning will be transformed through student utilization of digital resources to provide a more personalized online learning experience. In addition, formative assessments will be used by teachers to select resources and activities based on students’ identified needs. Reductions in spending for textbooks, paper, copier and printer maintenance, along with the need to no longer maintain general use computer labs, will provide funds to purchase new devices every four years.
The time has come for the district to establish a new District Improvement Plan. I plan to set up a series of meetings to receive input from our parents and community members regarding the performance of the district to help establish our new three year goals. I hope each of you will consider participating in this process.
As always, if you have any questions or concerns regarding any aspect of the operation of the _____ School District, do not hesitate to contact me. I pledge to return your call or email within 24 hours. I wish you and your children a very successful 2014 – 2015 school year. I look forward to seeing each of you at building and district events.
But David Coleman is changing the ACT to align with Common Core … so scores will go up and he will look like a hero.
David Coleman is president of the College Board, which writes the SAT. The ACT exam is a different exam run by different people.
Are they talking about raw or scaled scores? And if the latter, are the scaled scores normed scores? After a web search, I couldn’t find out whether the scaled scores are normed or not. It seems like they might be, as they are supposed to be consistent across versions of the test whereas the raw scores are not. *If* that is true, this is akin (if not identical to) complaining that the median score continues to hover around the 50th percentile.
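The commenter’s hunch can be illustrated with a toy sketch. If the scaled scores were strictly norm-referenced (renormed on each year’s cohort), flat averages would be expected by construction, no matter what happens to raw performance. The raw scores below are invented, and the ACT’s actual equating procedure is more involved than this crude percentile mapping:

```python
import statistics

def scale_by_rank(raw_scores, scale_min=1, scale_max=36):
    """Map each raw score to a scaled score by its percentile rank
    within the cohort (a crude norm-referenced scaling)."""
    ranked = sorted(raw_scores)
    n = len(raw_scores)
    return [round(scale_min + (ranked.index(s) / (n - 1)) * (scale_max - scale_min))
            for s in raw_scores]

# Two invented cohorts: the second gets far more items right overall.
cohort_2010 = [20, 30, 40, 50, 60, 70, 80, 90, 100, 110]
cohort_2014 = [s + 15 for s in cohort_2010]   # everyone improved by 15 raw points

# Yet the mean *scaled* score is identical, because the scale is relative.
print(statistics.mean(scale_by_rank(cohort_2010)))  # 18.5
print(statistics.mean(scale_by_rank(cohort_2014)))  # 18.5
```

If (and only if) ACT scaling worked this way, a flat composite really would be “the median hovering around the 50th percentile,” as the commenter says; equated scales, by contrast, are designed to register cohort-level change.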
I think tests should just be used to measure progress. Screw all of this pass/fail business. That’s it. A kid has to show growth, and that is all.
How would you define “growth?” VAM has been shown to be extremely unreliable, and it supposedly defines growth.
What’s VAM?
VAM looks to measure the effectiveness of teachers using changes in student scores on standardized tests.
What does VAM stand for?
Value Added Measures. Originally developed by an economist to determine crop yields, and now twisted into supposedly evaluating how much “value” a student has added to him or her by his or her teacher during the year.
A link on some of the studies:
http://www.ascd.org/publications/educational_leadership/may10/vol67/num08/Using_Value-Added_Measures_to_Evaluate_Teachers.aspx
VAM = Vacuous And Moronic
MathVale: ¿?
I thought VAM stood for “Value-Addled Modeling/Measurement.”
😳
But there is a 98% “satisfactory” [thank you, Bill Gates!] chance of certainty that you are correct.
Go figure…
😎
So much for the Corporate Deformers and decisions being “Data Driven” – right off a cliff.
As it is being used, perhaps it should be “Data Drivel” . . .
Here’s a post on Plunderbund sharing the letter of a person who ALMOST became a teacher at a StudentsFirst school but realized she would NOT be part of it.
Great Letter!
http://www.plunderbund.com/2014/08/19/ohio-teacher-turns-down-studentsfirst-in-spectacular-fashion/
If you asked most high schools, they would view maintaining a stable average ACT composite score over five years while participation increased 17.7% as a sign of success, not failure.
Excellent point.
Ahhhh. That is the point … the EduReformers do not CARE what high schools think … and increasingly … what colleges and universities think. Forget teacher opinions. We are so far down the clout totem pole that we just deal with the mud splashed in our faces from the rains.
Deb,
The point here is that the additional students taking the exam were probably drawn from a relatively less well prepared pool of high school students. If everyone performed the same as before, the mean and median score should have dropped.
Okay …. that wasn’t MY point.
Deb,
That was, I think, the point of the person whose post you were commenting on. It was somewhat technical, so I thought an explanation might be helpful.
I generally prefer to keep my answers experiential. Some appreciate it, others don’t. I like reality checks, applications, experiences, and ground level truths. I mean, to me, the reality is what happens/ed. The manufactured studies done simply cloud the truth of the way things come down in the classroom. They can use their magic data charts to “prove” what should be, but that is generally far from what any of us experience.
Deb,
The problem with experiential knowledge is that any individual has a limited set of experiences.
I know that. I merely wish to add the perspective of a teacher in layman’s terms. If you wish to discount that, it is your right. I have found that most parents find too much so-called professional jargon to be off-putting. If we need to look up additional “papers,” Google is right here. Again, don’t read what you find worthless. I do that often.
Also, it brings out the fact that IN THE CLASSROOM we experience these things in all states. We need to be able to express universal objections to what this is doing to students, teachers, and careers.
It is rather a limited and qualified “excellent point.”
In the nine states that require all students to take the ACT, scores have been consistently flat for the last four or five years. These states have NOT seen significant increases in ACT participation.
TheMorrigan,
And the states in which additional students have taken the ACT have seen no decrease in ACT scores. Do you want to defend the position that the new students taking ACT exams are just as likely to score a 36 as the students who typically took the ACT before?
Just providing perspective, TE.
FYI, I understood Stiles’ point the first time around. And I DO agree with it–to a point (I never said I didn’t. Why did you assume that I did not when my content speaks otherwise?). But we should also be aware of those nine states that test all their students, have not seen increases in participation, and have not seen any growth. That is why I started with “It is rather a limited and qualified ‘excellent point’”–because the truth is a little more complicated than you and Stiles imply with this “excellent point.”
Right?
TheMorrigan,
My comments are not often viewed with much charity here, so you might forgive a somewhat defensive response. Nine out of the fifty states (I don’t know if D.C. counts here) is, perhaps, a small qualification.
Since there are relatively fewer changes and variables in those nine states for the last four years, it could be argued that they are better gauges–tests–for determining a particular program’s success or failure or somewhere in between. But perhaps that is just my biased perspective, eh?
I guess I can see how looking at the whole map and adding in theoretical possibilities and apologies might be more complex and scientific to a more astute mind.
TheMorrigan, I agree that the reality is more complex. I read the Fairtest statement, saw the increase in participation and wanted to point that out. I dislike it when reformers present a slant on data to present a picture of failure. I respect much of Fairtest’s work, but I don’t like it when they also present a slant on data to say that there is no progress in the public school system. That was my reaction.
You mentioned the states where all students take the ACT and I think looking at them is instructive. In 2010, there were seven states that tested all or almost all of their graduates (96% or more). Here are their average composite scores, first from the 2010 ACT Condition of College and Career Readiness report and then from the 2014 report.
Colorado 20.6, 20.6
Illinois 20.7, 20.7
Kentucky 19.4, 19.9
Louisiana 20.1, 19.2 (!)
Michigan 19.7, 20.1
Mississippi 18.8, 19.0
Tennessee 19.6, 19.8
Wyoming 20.0, 20.1
Quite a mixed picture.
Stiles,
It seems to me that the students taking the state-mandated ACT tests likely fall into two broad populations: students taking the test because they wish to continue their education and have an incentive to do well on the exam, and students who are taking the exam because they were told to take it and have little incentive to do well. Getting teenagers to try on no-stakes exams is nearly impossible, as Dr. Ravitch has explained.
This mix in the population of test takers makes it difficult to understand the results in those states. States where ACT tests are taken voluntarily might be presumed to only have motivated students taking the test.
VAM = Value Added Metric … a SUPPOSED year’s growth exhibited by all students in a grade level. They are to show that they gained a year’s growth prior to the end of the school year, due to the dates of the tests being administered. They score the tests (which aren’t necessarily indicative of a year’s more difficulty than the previous year’s). If the student was advanced in the 3rd grade, they are expected to be advanced in the subsequent years. If they aren’t, it is the fault of the teacher. NOTHING else matters.

The child could be in the midst of an adoption, an illness, a divorce, poverty, hunger, you name it … it has to show a year’s growth or the teacher sucks according to this stupid idea. Kids move in and out, kids go on vacation the weeks before the test, kids break arms, kids get allergies or illnesses. They must fall in line. They miss practice. They may be scared to death on the day of the test. Nothing matters.

Furthermore, if a kid is proficient, they must remain proficient. If they are basic, they must be basic or higher. Each time they do increase, the next year’s score has to be a full year’s growth or else. If a child has a great day on his/her 3rd grade test, he/she had better be on top of the game for the rest of their lives. It can happen. It isn’t guaranteed. And, the teacher isn’t totally responsible.
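For readers who want the mechanics behind this: most VAM implementations boil down to predicting this year’s score from a prior score and attributing the class’s average residual to the teacher. Below is a minimal sketch with invented numbers; real models add demographic covariates, multiple prior years, and statistical shrinkage, but the core logic is this:

```python
import statistics

def fit_line(x, y):
    """Least-squares slope and intercept predicting y from x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Step 1: fit the prediction line on the whole (invented) district.
district_prior = [400, 420, 440, 460, 480, 500, 520, 540, 560, 580]
district_now   = [405, 428, 441, 468, 484, 505, 518, 545, 566, 583]
slope, intercept = fit_line(district_prior, district_now)

# Step 2: a teacher's "value added" is the class's mean residual --
# how far above or below the district prediction the class averaged.
class_prior = [430, 470, 510]
class_now   = [445, 480, 530]
va = statistics.mean(c - (intercept + slope * p)
                     for p, c in zip(class_prior, class_now))
print(round(va, 2))   # positive: the class beat the prediction
```

The objection voiced throughout this thread is that the residual also absorbs everything else that happened to those students that year (illness, divorce, moves, test-day nerves), which is one reason single-year, small-class VAM estimates are so unstable.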
Deb: while there are some fine works available re standardized testing, the most recent and in-depth, go-to book for VAM is Audrey Amrein-Beardsley’s RETHINKING VALUE-ADDED MODELS IN EDUCATION: CRITICAL PERSPECTIVES ON TESTS AND ASSESSMENT-BASED ACCOUNTABILITY (2014).
No disrespect to the author, but this is not for the completely uninformed or faint of heart. I would recommend the following [in the order listed] before starting on the above:
A) Todd Farley, MAKING THE GRADES: MY MISADVENTURES IN THE STANDARDIZED TESTING INDUSTRY;
B) Banesh Hoffman, THE TYRANNY OF TESTING;
C) Noel Wilson [see Duane Swacker on various threads of this blog, including today’s]; and
D) Daniel Koretz, MEASURING UP: WHAT EDUCATIONAL TESTING REALLY TELLS US.
Given some prior acquaintance with the above and similar works—I do not write this lightly—Amrein-Beardsley’s work leaves the reader with the conclusion:
“There is no there there.” [Gertrude Stein]
😎
P.S. She also has an online blog; google “VAMboozled!” for the website.
“FAIRTEST: Flat ACT Scores Show Failure of NCLB, Race to the Top”
NO! The scores do not “show” anything. It is the people around those scores who determine the supposed meaning, much like the Oracle of Delphi interpreting what the gods had to say, or phrenologists determining a person’s psychological makeup, or astrologers divining the future.
To quote TE, “there’s nothing there” in those scores, and therefore in those interpretations. Or, as I say, it’s all a bunch of mental masturbation: it may be pleasant, but it isn’t the real thing.
The many epistemological* and ontological** errors of process in developing standardized tests and educational standards render the whole process COMPLETELY INVALID, making any conclusions/interpretations, as Noel Wilson states, “VAIN AND ILLUSORY”. To understand why, read and comprehend Wilson’s never refuted nor rebutted complete destruction of those educational malpractices in “Educational Standards and the Problem of Error” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine. (updated 6/24/13 per Wilson email)
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think college professor who “knows” the students capabilities and grades them accordingly), the General Frame-think standardized testing that claims to have a “scientific” basis, the Specific Frame-think of learning by objective like computer based learning, getting a correct answer before moving on to the next screen, and the Responsive Frame-think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback. Each category has its own sources of error and more error in the process is caused when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid (errorless, or supposedly at least with minimal error) [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations and the test and results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error”, any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory”. In other words start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”) or to put in more mundane terms crap in-crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “‘something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self evident consequences. And so the circle is complete.”
In other words students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of the students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an “A” student” is almost as harmful as “I’m an ‘F’ student” in hindering students becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
*Epistemology (from Greek ἐπιστήμη – epistēmē, meaning “knowledge, understanding”, and λόγος – logos, meaning “study of”) is the branch of philosophy concerned with the nature and scope of knowledge and is also referred to as “theory of knowledge”. It questions what knowledge is and how it can be acquired, and the extent to which any given subject or entity can be known. (from Wiki)
** Ontology is the philosophical study of the nature of being, becoming, existence, or reality, as well as the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology deals with questions concerning what entities exist or can be said to exist, and how such entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences. (from Wiki)
A real simple way to say the same thing is that there are too many variables for VAM to be valid. It is not a meaningful way to assess a teacher’s impact on students. It belongs in the dumpster with other “junk science.”
While I understand the temptation to use a tool of deformers (stdzd test scores) against them, I’m not sure what to make of flat ACT scores.
If ACT (or SAT) scores were an accurate measure of ‘college readiness’, presumably a college application would consist only of that score. Yet I’ve read that an increasing number of colleges prefer to base admission decisions on GPA, essay, et al, ignoring the SAT/ACT.
It’s easier to buy into this argument using NAEP scores, whose history of slow but steady increase was supported by similar increases in other stats like % graduation et al–and which also flattened corresponding to NCLB/RTTT. Plus NAEP, given in 4th, 8th, & 12th grades and focusing on long-term trends, seems a more thorough measurement.
I have lost respect for standardized tests but feel no hesitation in using them against phony reformers. If they say they are the measure of all things, then they darn well should produce them. But if their “reforms” don’t produce what they promise, then what was all that disruption for? Just fun and games for the reformers, playing with people’s lives and careers.
dianeravitch: I heartily concur.
Their own “best arguments” about hard data points and numerical clarity and mathematical certainty are pure spin. As soon as one touches graduation rates or VAM ratings or standardized test scores and the like, one gets a lesson in what the modifier “squishy” means.
In fact, their own numbers and stats are often the very best arguments to use against them. Which explains why they are so very reluctant to be open and transparent about their data and how they use and interpret their own figures.
Of the entire charterite/privatizer movement it can be said that—
“He uses statistics as a drunken man uses lamp posts — for support rather than for illumination.” [Andrew Lang]
Thank you for your comments.
😎
SF and F,
At my institution ACT scores do a better job of predicting college grades than high school grades for Pell eligible students. Perhaps this is because test scores play no role in admission and the range of ACT scores is much wider than high school gpa.
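TE’s observation about score range reflects a standard statistical effect, restriction of range: a predictor whose variation is narrowed among the people actually observed (as high school GPA is among admitted students) shows a weaker observed correlation with outcomes even if its underlying relationship is unchanged. A simulated illustration with entirely invented data (no claim about the actual admissions numbers at anyone’s institution):

```python
import random
import statistics

def corr(xs, ys):
    """Population Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(1)
# Invented data: an outcome (think college GPA) partly driven by a predictor.
predictor = [random.gauss(0, 1) for _ in range(20_000)]
outcome = [0.6 * p + random.gauss(0, 0.8) for p in predictor]

# Correlation over the full range vs. over only the top half of the
# predictor: restricting the predictor's range attenuates the correlation.
full = corr(predictor, outcome)
kept = [(p, o) for p, o in zip(predictor, outcome) if p > 0]
restricted = corr([p for p, _ in kept], [o for _, o in kept])
print(round(full, 2), round(restricted, 2))
```

The restricted correlation comes out substantially lower than the full-range one, which is consistent with TE’s suggestion that a wider-ranging ACT score can out-predict a compressed GPA even if GPA reflects learning at least as well.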
By Eric Kangas, Independent Researcher & retired Instructor; ekangas@juno.com;
Some California State K-12 proficiency research explaining why NCLB, ACT, and SAT scores remain flat, currently and possibly in the future as well.
A more complete paper of six pages was reduced for the blog. The important point is that unless the student is promoted by proficiency, especially in the math and English areas throughout grades K-12, the student will generally perform poorly on any standardized test that evaluates learning/achievement, including the NAEP, SAT or ACT. It is also insightful that the California State K-12 data (see Tables 1 & 2) suggest that the rate-determining step, especially in the math discipline, is the fourth grade, since the percent of proficiency for all groups and the composite declines from grade five through twelve. The state K-12 data also suggest that unless testing and school programs empirically address appropriate entry-level placement with effective learning models, so that “sufficient direct instruction” on the material being taught occurs, student learning will be very limited on any standardized test. Research documents that student placement by proficiency on the prerequisite skills on entry to the courses maximizes student learning and achievement.
A very simple, inexpensive, and effective model to accomplish the proficiency goal for students is to place and promote students by proficiency level INDEPENDENT of attendance. For example, a student could be in fourth grade by attendance, but could be placed into levels 1, 2, 3, 4, 5, or 6 by achievement, based upon the teacher’s recommendation or testing. A useful start is to develop a pilot program beginning at a class or school level. Then, give the parents and the students the CHOICE of participating in the program that promotes students by proficiency level independent of grade, or not. Each program level would not necessarily be equivalent to a yearly grade, but could be by grading period or semester. In the model, the student would continue to be promoted by attendance each school year, as currently done in K-12. The model would have no remedial nor advanced courses, but only one series of courses in each discipline. In addition, the cost per student and discipline problems would decrease and the class size could increase (reducing costs), as the responsibility for learning is placed on the student instead of the teacher/school. Research suggests that this learning and achievement model would significantly increase student achievement regardless of socio-economic, ethnic, age, and gender conditions. Research claims available from ekangas@juno.com.
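The placement model described above can be made concrete with a small sketch. The thresholds, records, and field names here are all invented for illustration: each student keeps an attendance grade while their course level in a subject is assigned purely by demonstrated proficiency.

```python
# Invented thresholds: (minimum score, level). A student is placed at
# the highest level whose threshold their proficiency score meets.
MATH_LEVELS = [(0, 1), (60, 2), (70, 3), (80, 4), (90, 5)]

def place(score, levels=MATH_LEVELS):
    """Return the highest course level the score qualifies for."""
    return max(level for cutoff, level in levels if score >= cutoff)

# Two fourth graders (by attendance) land in very different math levels.
students = [
    {"name": "A", "grade": 4, "math_score": 55},
    {"name": "B", "grade": 4, "math_score": 92},
]
for s in students:
    s["math_level"] = place(s["math_score"])   # independent of grade

print([(s["name"], s["grade"], s["math_level"]) for s in students])
# [('A', 4, 1), ('B', 4, 5)]
```

Both students remain “fourth graders” for attendance purposes; only the course sequence they sit in differs, which is the heart of the proposal.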
TITLE: California K-12 STAR 2012 data, graphs, & equations for four ethnic groups and composite. Note: The graphs and equations were omitted for the blog, but are available.
DISCUSSION: California’s K-12 NCLB evaluation system, titled STAR, provides students, parents, educators and the public a comprehensive, five-level performance view of student, class, school, district and state achievement for all grades 2 to 11 in many academic areas, including the required math, English, and science areas, as mandated by the NCLB law. The STAR test results were readily available to teachers, parents, administration, and school board members directly, and to the general public via the Internet. Unfortunately, the STAR promotion-by-proficiency requirement for each student in math and English was largely ignored by the K-12 educators, local school board members, the news media, politicians, and the general public. The NCLB law and the NAEP (our nation’s report card) define five levels of evaluation from high to low. However, the major requirement of the law is that students must be promoted from grade to grade by proficiency on an incrementally increasing yearly scale. The result was that all students had to be proficient in the math and English disciplines by the 2013-2014 timeline or face punitive actions. Unfortunately, the California State Department of Education, State Legislature, and the governor terminated the STAR (NCLB) evaluation model to avoid the possibility of punitive legal actions.
Several traditional public elementary and middle schools have come close to achieving this promotion-by-proficiency goal for all students. Deer Canyon Elementary and Mesa Verde Middle School in the Poway School District, for example, had percentages of students at or above proficiency in the mid-90s in both math and English in 2012. The high proficiency rates of these schools suggest that the NCLB proficiency requirement is attainable in the public schools. However, the state data in Tables 1 and 2 for both math and English strongly document that proficiency is NOT being achieved statewide; in fact, the percent of students at proficiency decreases incrementally across grades 5-12. This decrease suggests that no non-proficient student will reach proficiency in the higher K-12 grades without effective intervention. Grade four therefore empirically defines the “rate-determining step” for proficiency in the K-12 schools. Since the prerequisite skills applied in math increase with the complexity of the discipline, the decline in the proficiency percentage with increasing grade is to be expected. In K-12 education, three California state proficiency laws (1998) and one federal law (NCLB, 2002) are currently being violated and are possibly subject to punitive legal actions by interested parties.
Listed below are Table 1 (math) and Table 2 (English), which evaluate student performance by proficiency on California’s STAR (NCLB) tests for 2012.
TABLE 1: Math proficiency rates on California’s 2012 STAR test for four ethnic groups and the composite. Note: The last five rows evaluate the desired PROFICIENCY GAP (validity) for each grade, not the racial achievement gap (reliability).
Grades 2 3 4 5 6 7 8 9 10 11 12**
Composite 64% 69% 71% 65% 55% 55% 46% 39% 27% 24% 17.4%
Asians 85% 89% 90% 87% 83% 83% 78% 67% 59% 54% 51.4%
White 77% 81% 80% 75% 69% 69% 58% 46% 36% 32% 25.5%
Latino 55% 62% 63% 57% 44% 45% 46% 26% 16% 13% 6.6%
Black 49% 56% 56% 50% 38% 39% 30% 19% 13% 11% 1.6%
Composite Gap to Proficiency 36% 31% 29% 35% 45% 45% 54% 61% 73% 76% 82.6%
Asians Gap to Proficiency 15% 11% 10% 13% 17% 17% 22% 33% 41% 46% 48.6%
White Gap to Proficiency 23% 19% 20% 25% 31% 31% 42% 54% 64% 68% 74.5%
Latino Gap to Proficiency 45% 38% 37% 43% 56% 55% 54% 74% 84% 87% 93.4%
Black Gap to Proficiency 51% 44% 44% 50% 62% 61% 70% 81% 87% 89% 98.4%
**Grade 12 values were projected from K-4 to K-11 data to meet the NAEP evaluation criteria for graduation.
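The “Gap to Proficiency” rows in the tables above are simple arithmetic: each gap is 100% minus the percent of students at or above proficiency. A minimal Python sketch (the dictionary and function names are illustrative, not from the original) recomputes the Table 1 gap rows from the proficiency rows:

```python
# Illustrative sketch: recompute the "Gap to Proficiency" rows of Table 1.
# By the document's own convention, gap = 100% - percent proficient
# (e.g., composite grade 2: 100 - 64 = 36).

table1_proficiency = {
    "Composite": [64, 69, 71, 65, 55, 55, 46, 39, 27, 24, 17.4],
    "Asians":    [85, 89, 90, 87, 83, 83, 78, 67, 59, 54, 51.4],
    "White":     [77, 81, 80, 75, 69, 69, 58, 46, 36, 32, 25.5],
    "Latino":    [55, 62, 63, 57, 44, 45, 46, 26, 16, 13, 6.6],
    "Black":     [49, 56, 56, 50, 38, 39, 30, 19, 13, 11, 1.6],
}

def gap_to_proficiency(percent_proficient):
    """Percentage-point gap between each value and 100% proficiency."""
    return [round(100 - p, 1) for p in percent_proficient]

for group, row in table1_proficiency.items():
    print(group, gap_to_proficiency(row))
```

Running the sketch also serves as a consistency check on the hand-computed gap rows in the tables.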
TABLE 2: English proficiency rates on California’s 2012 STAR test for four ethnic groups and the composite. Note: The last five rows evaluate the desired PROFICIENCY GAP (validity) for each grade, not the racial achievement gap (reliability).
Grades 2 3 4 5 6 7 8 9 10 11 12**
Composite 58% 48% 67% 63% 59% 62% 59% 57% 50% 48% 46%
Asians 81% 71% 75% 82% 81% 82% 81% 79% 74% 69% 66%
White 73% 66% 82% 76% 77% 79% 75% 74% 67% 63% 59%
Latino 48% 35% 37% 50% 48% 50% 48% 45% 39% 36% 33%
Black 48% 37% 56% 50% 48% 50% 47% 43% 37% 33% 29%
Composite Gap to Proficiency 42% 52% 33% 37% 41% 38% 41% 43% 50% 52% 54%
Asians Gap to Proficiency 19% 29% 25% 18% 19% 18% 19% 21% 26% 31% 34%
White Gap to Proficiency 27% 34% 18% 24% 23% 21% 25% 26% 33% 37% 41%
Latino Gap to Proficiency 52% 65% 63% 50% 52% 50% 52% 55% 61% 64% 67%
Black Gap to Proficiency 52% 63% 44% 50% 52% 50% 53% 57% 63% 67% 71%
** Grade 12 values were projected from K-4 to K-11 data to meet the NAEP evaluation criteria for graduation.
Note: All students are mandated by three state laws and the NCLB law to be proficient by the 2013-2014 school year.