As a predictor of readiness for college, grade point average matters more than a score on a college admissions test like the SAT or ACT. Even the testing companies acknowledge that this is the case. But they are businesses, and they compete with one another for numbers and dollars. So they are always on the lookout for new avenues by which to serve their customers (the colleges, not the students).
The ACT, Mercedes Schneider reports, will offer a new service to colleges (not to students). It will not only test the student but also give the college confidential advice about his or her readiness, based on subtest scores. This information will go to the college, not to the student.
Schneider writes:
Thus, ACT is intentionally shifting its role from reporting test scores to advising postsecondary institutions regarding admissions decisions.
There’s more:
Students will not be privy to the advice ACT is offering regarding ACT’s predictions of student success. None of this info will be part of the student score report. Such info will be between ACT and postsecondary institutions.
And not only does ACT believe it has a right to both form and communicate its opinions of student success to colleges and universities; ACT is fine with forming some of its judgments based upon unverified, volunteered student self-report information.
So, get this. The students pay to be tested; ACT reports the results to the students and to colleges. But then ACT gives the colleges information about the students and recommends whether or not they should be accepted. This advice is not shared with the students who paid to be tested.
Does this strike you as outrageous? ACT is not your guidance counselor. What nerve!

It makes perfect sense in a country where you can be on all sorts of watch lists, yet you have no right to know which lists, if any, you’re on, and if you try to sue for damages, you’ll be told you have no standing because you don’t know that you’re on the list. And yet most Americans just shrug their shoulders. When’s the next Bachelorette?
“. . . Bachelorette”???
Is that a TV program?
Yes, that along with Honey Boo Boo seems to be what most Americans care about as our country crumbles around us. Present company excepted.
Thanks, Dienne.
I’m a little out of the loop when it comes to TV, as I’ve never been one to watch all that much. I will admit to being a sports fan, and tonight will be good with the women’s World Cup, USA vs. China, and then some baseball, Cards vs. Cubs. We’ll see if the Cubs can make some kind of statement against the best team in baseball: the Cardinals, Los Birdos, the Birds on the Bat. I doubt it, for they are the Cubbies.
These companies continue to insert themselves into a process that is really not their purview. Any institution of higher education that would allow them into the student selection process is probably a school you really should not attend. Schools should ignore them, and, hopefully, they will just go away. Let’s hope that’s what “the market tells them.”
The “reform” movement has emphasized “data” as an objective means of evaluation. Since the data show that the ACT is NOT predictive of college success, it should NOT be required (as it is in Florida to receive a state-sponsored Bright Futures Scholarship). It is best to use the data to refute the social/political assumptions.
’Tis quite a bit beyond outrageous.
I believe it was Kant who said something about treating others as “ends” and not “means.” Using the students to enhance the scurrilous test makers’ own coffers, without permission and without explicitly stating what is being done. I would say “one-eyed rattlers,” but that would be a disservice to rattlers. Take your pick(s) and add them together:
barbaric, brazen, disgraceful, egregious, heinous, horrendous, horrible, inhuman, scandalous, scurrilous, shocking, violent, wanton, abominable, atrocious, beastly, contemptible, contumelious, corrupt, criminal, debasing, debauching, degenerate, depraving, disgracing, flagitious, gross, ignoble, iniquitous, malevolent, monstrous, nefarious, odious, opprobrious, shameless, shaming, sinful, unbearable, ungodly, unspeakable, villainous, and/or wicked. (thanks to thesaurus.com)
I feel sorry for them. They are going to be monitored, tracked, and measured every moment they are in school.
It’s just freaking grim.
Some of this is new, but some is not. Some is not transparent to the student at all, some is more transparent than the way it was presented in the original article.
The changes with the writing test sub scores are new to me.
The aggregation of the math and science subject scores into a STEM score and the English, reading, and writing into an ELA score started two years ago with the Aspire and have now rolled up to the main ACT assessment. Same for the understanding complex texts and progress toward career readiness indicators.
The career readiness levels of bronze, silver, and gold are from the National Career Readiness Certificates ACT offers based on the WorkKeys assessment. The WorkKeys has been around for a few decades, but there has been a recent trend to include it as a universal high school assessment in states like Illinois, Louisiana, and Wisconsin. Some states are pushing this and ACT is now predicting WorkKeys results from Aspire and ACT results. While there are some employers who look at WorkKeys scores, I am not sure they have any general currency in the job market.
The sub scores like Usage/Mechanics, Plane Geometry/Trig, etc. have been around for a long time. They are in the institutional reports high schools and colleges receive and have been on the student report too. The student self-reported information has also been included in institutional reporting for as long as I can remember.
The ACT college readiness benchmarks are not new either and how a student’s subject score compares to the benchmark actually is displayed on the student report.
Another thing that is new to me is that ACT has extended the college readiness prediction from first-year courses like College Algebra to program GPAs in fields like Business Administration. It’s possible institutions of higher education that purchase ACT Research Services have had this information before, but now the rest of us know that these types of predictions are being reported to colleges and universities.
Students should be able to see on their report, or on an easily available and readable Internet resource, the additional information such as the program success estimations. ACT really needs to rethink the way it is withholding this information from students.
All of these estimations of future course success, program completion, and preparation for a specific career are based on statistical analyses that are technically careful but very aggregate. When a student’s ACT English score is associated with a 48% chance of a B or better in Freshman English, that is the correlation across the universe of institutions of higher education. It is not specific to a local community or technical college, a state system 4-year campus, or a highly selective public or private institution. The same is true for the WorkKeys readiness benchmarks. All of the measurement limitations of a standardized assessment apply (as Señor Swacker states or will state more eloquently than I) and are exacerbated by the loose and general nature of the entire question.
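To make the aggregation point concrete, here is a minimal sketch in Python. All the numbers are invented for illustration (ACT publishes no such breakdown, and these cohort shares and success rates are assumptions, not real data); it simply shows how a single pooled probability can blend very different institution-level rates:

```python
# Hypothetical figures only: the share of score-senders and the B-or-better
# rate at each institution type for students with a given ACT English score.
cohorts = {
    "community/technical college": {"share": 0.40, "p_success": 0.62},
    "state 4-year campus":         {"share": 0.45, "p_success": 0.44},
    "highly selective campus":     {"share": 0.15, "p_success": 0.22},
}

# The pooled figure is a weighted average across all institution types...
pooled = sum(c["share"] * c["p_success"] for c in cohorts.values())
print(f"pooled chance of a B or better: {pooled:.0%}")  # 48%

# ...yet no single institution type actually matches the pooled number.
for name, c in cohorts.items():
    print(f"  {name}: {c['p_success']:.0%}")
```

A student reading “48%” on such a prediction would have no way to know whether the figure is closer to 62% or 22% at the campus she is actually applying to.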
ACT is probably responding to a desire articulated by colleges and universities. Even more than ACT, shouldn’t we be asking institutions of higher education what kind of weight these numbers have in admissions and placement decision making?
perfect fodder for John Oliver to follow up on his last spin
Some states like mine (Alabama) require all students to take the ACT. It is given to all juniors on a school day here in Alabama. Also, Alabama went with ACT Aspire for testing 3rd-8th graders. Anyone have much research on that test? I heard of a few parents opting kids out in northern Alabama, but I live in the city with Alabama’s largest public school system and things are quiet around here. Parents have been told by the Superintendent as well as principals to expect low scores for a couple of years, and that the Alabama College and Career Ready Standards (simply a re-brand of CC with the 15% allowed standards added in) will serve/are serving the purpose of having all students ready for college and/or a career.
This could possibly be a response to the growing number of colleges that are making the SAT/ACT optional for admission. It allows the ACT to establish a market for its services even as greater emphasis is placed on GPA and less on test scores.
I believe colleges are using test scores as a proxy to determine student SES in an effort to target wealthy families. This “service” would allow that practice to continue even if the SAT/ACT no longer becomes a requirement.
The frustrating part is that several states, including Utah, REQUIRE juniors to take the ACT. So they are turning all of the students’ information over to a private company, which makes money by selling that information to even more private companies. It’s disgusting.
Yet another example:
Our “Brave New World” exemplified.
I have to (ruefully) laugh. The college our daughter attended required neither the ACT nor the SAT (that wasn’t the case before she applied, so she, being a perfectionist honors/AP student, sweated bullets and took the test twice, because SHE wanted her English score to be in the 30s; it was, and the 2nd test date, unfortunately, came up right after her grandfather’s death, but she felt compelled to take it). She did, in fact, achieve that high score.
Anyway, when she was a high school junior (back in 2004), she DID refuse to take the PSAT, w/good reason–“I’m not taking the SAT, so why should I take the PSAT? Waste of time!”
An opt-out(er) after my own heart!
“An opt-out(er) after my own heart!”
rbmtk: You and your husband (tell him hello) obviously did a good job in raising your daughter!
Declining to take the PSAT does have the consequence of making a student ineligible for the National Merit Scholarship competition, so it’s probably not a well-advised move for a student who is likely to score well on it.
And the best predictor of PSAT score (just like the SAT) is family income.
So, the “merit” scholarships awarded are not really for “merit.” They go mostly to students who don’t need the financial help.
I’ve been saying on this blog for some time now that the ACT and SAT are bogus; they don’t predict college success, they primarily measure family income, and they are used by colleges – especially the “elite” ones – for nefarious purposes.
But it’s deeper than just that. The College Board and ACT, Inc. were instrumental in creating the Common Core and both organizations say they’ve “aligned” all of their products to it. It goes deeper still.
ACT prominently touts that its “Course Standards and College Readiness Standards successfully align with the Common Core State Standards.” The ACT also says that “the ACT Course Standards are empirically derived course standards that form the basis of ACT’s high school instructional improvement program.”
There’s an entire “alignment” package for sale:
Click to access CommonCoreAlignment.pdf
Meanwhile, the College Board says that it “has been a strong advocate for and played an active role in the development of the Common Core State Standards. As part of this collaboration, the College Board helped draft the standards…” The College Board has developed what it calls “the College Board Standards for College Success” which are “benchmarked against the Advanced Placement Program.” Moreover, the College Board says that its “Readiness Pathway,” which includes “the PSAT/NMSQT and the SAT — college readiness assessments taken by millions of students annually,” is also “aligned” with the Common Core. It adds this:
“As new assessments emerge and existing assessments are enhanced, the College Board will conduct additional studies to understand the alignment of other forms of assessments that may be administered in support of the Common Core State Standards, including end-of-course and end-of-domain assessments.”
Click to access Common-Core-State-Standards-Alignment.pdf
The ACT and SAT are horribly bad at predicting success in college. College enrollment specialists and consultants find that the SAT predicts between 3 and 14 percent of the variation in freshman-year college grades. As one pointed out, “I might as well measure shoe size.”
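For readers who want the arithmetic behind that range: “percent of variation explained” is an R-squared, and taking its square root recovers the underlying correlation coefficient. A quick sketch in Python:

```python
# Convert "variance explained" (R-squared) back to a correlation coefficient.
# An R^2 of 0.03-0.14 corresponds to r of roughly 0.17-0.37 -- a weak link.
for r_squared in (0.03, 0.14):
    r = r_squared ** 0.5
    print(f"R^2 = {r_squared:.2f}  ->  r = {r:.2f}")
```

Even at the top of the quoted range, a correlation of about 0.37 leaves 86 percent of the variation in freshman grades unexplained by the test score.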
The ACT is only marginally better than the SAT at predicting college success. The authors of a study in Ohio found the ACT has minimal predictive power. In their concluding remarks, the authors ask, in amazement, “…why, in the competitive college admissions market, admission officers have not already discovered the shortcomings of the ACT composite score and reduced the weight they put on the Reading and Science components. The answer is not clear. Personal conversations suggest that most admission officers are simply unaware of the difference in predictive validity across the tests.”
The authors suggest ulterior motives on the part of college officials:
“An alternative explanation is that schools have a strong incentive – perhaps due to highly publicized external rankings such as those compiled by U.S. News & World Report, which incorporate students’ entrance exam scores – to admit students with a high ACT composite score, even if this score turns out to be unhelpful.”
We’ve known this for a long time:
“The ACT and the College Board don’t just sell hundreds of thousands of student profiles to schools; they also offer software and consulting services that can be used to set crude wealth and test-score cutoffs, to target or eliminate students before they apply…That students are rejected on the basis of income is one of the most closely held secrets in admissions; enrollment managers say the practice is far more prevalent than most schools let on.”
The net result is this: “More and more, schools are chasing the small number of students who have the money or the test scores that help an institution get ahead. As those students command higher and higher tuition discounts, they leave a smaller and smaller proportion of the financial-aid budget for poor students, who are increasingly at risk of being left out of higher education.”
So, here’s the situation. The Common Core standards are being touted as “necessary” to improve public education and American economic competitiveness and prosperity. The ACT and College Board are pimping for more testing and more “academic readiness pathways” in the lower grades. And it’s all a fiction.
It’d almost be laughable if it were not insidious.
So, why do so many politicians and public school educators – not to mention students and parents – buy into the nonsense?
“I’ve been saying on this blog for some time now that the ACT and SAT are bogus;”
Yes, not only bogus but COMPLETELY INVALID as shown by Noel Wilson. (thanks for the opening, democracy, I know you know all this but many other new readers don’t).
“Educational Standards and the Problem of Error” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine. (updated 6/24/13 per Wilson email)
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think college professor who “knows” the students capabilities and grades them accordingly), the General Frame-think standardized testing that claims to have a “scientific” basis, the Specific Frame-think of learning by objective like computer based learning, getting a correct answer before moving on to the next screen, and the Responsive Frame-think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback. Each category has its own sources of error and more error in the process is caused when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, i.e., errorless or supposedly at least with minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations and the test and results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “‘something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of the students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an “A” student” is almost as harmful as “I’m an ‘F’ student” in hindering students becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
We at ACT wish to respectfully correct some possible misperceptions that may arise from this post and other blog postings on this topic.
First and foremost, ACT, college admissions professionals, and prospective students all share the same interest—achieving educational success. We believe it is in the best interest of all concerned that a college admission decision be based on the best possible data available, pertinent to that institution, as a measure of whether the student is equipped to be successful on that campus. We provide this free, optional service to participating institutions to help them assess whether a specific applicant is equipped to succeed on that campus based on the totality of information available.
Also, it may be helpful to know that:
— The Prediction Service is not new. ACT has offered this data to participating institutions for more than 35 years. Approximately 200 of the nation’s colleges currently participate.
— The data is offered at no charge to participating institutions. We do not sell this data.
— ACT does not provide recommendations about whether or not students should be accepted. We provide only data and information so that colleges and universities can make more informed admission decisions themselves.
— The participating college/university specifies the data they are interested in having ACT analyze and report; therefore the data is institution specific.
— Chances of success data are reported only for students who send their scores to that institution.
Any data provided by ACT is just one part of a holistic approach to admission decisions. Thank you for the opportunity to comment.