A major new study finds that college entrance tests matter less than high school grades. Indeed, the rise of the test-tutoring industry has increased the significance of family income as a predictor of scores on the ACT and SAT: tutors can earn hundreds of dollars per hour to prepare young scions for the tests.
Here is the press release about the study by William Hiss, former head of admissions at Bates College:
FairTest
National Center for Fair & Open Testing
for further information:
Bob Schaeffer (239) 395-6773 / cell (239) 699-0468
for release Tuesday, February 18, 2014
TEST-OPTIONAL ADMISSIONS ADVOCATES APPLAUD NEW STUDY:
RESEARCH FINDS ELIMINATING ACT/SAT SCORE REQUIREMENTS
PROMOTES EQUITY AND ACADEMIC QUALITY
A major study released today shows that ACT/SAT-optional schools increase campus diversity without harming academic performance. Defining Promise: Optional Standardized Testing Policies in American College and University Admissions analyzed the records of 123,000 students at 33 institutions.
“This landmark research shows that test-optional plans promote both equity and excellence,” said Robert Schaeffer, Public Education Director of the National Center for Fair & Open Testing (FairTest). “More colleges and universities now have the data to support dropping ACT/SAT requirements.”
FairTest leads the movement to de-emphasize admissions test scores. The group’s website lists more than 800 test-optional four-year schools (http://fairtest.org/university/optional). The database includes more than 150 institutions ranked in the top tiers of their respective categories.
Among the key findings of today’s report:
– Students admitted without regard to their ACT or SAT scores do as well academically as those entering under regular criteria.
– Test-optional admission is particularly valuable for first-generation, minority, immigrant, rural, and learning-disabled students.
– High school grades are much stronger predictors of undergraduate performance than are test scores.
– Standardized testing limits the pool of applicants who would be successful in college.
– Test score requirements for “merit” scholarships block access for many talented students.
The schools analyzed include private colleges, public universities, minority-serving institutions, and art institutes. William Hiss, former head of admissions at Bates College, was the project’s principal investigator. The study is online at:
http://www.nacacnet.org/media-center/PressRoom/2014/Pages/BillHiss.aspx
– 30 –
– A timeline of schools de-emphasizing ACT/SAT scores over the past decade and a list of top-ranked test-optional institutions are available on request.
High school grades reflect perseverance. Tests reflect a moment in time, and they also reflect nervousness about “the unknown”. There are some among us who have this admiration for taskmaster rigor, like Professor Kingsfield in The Paper Chase. If learning has to be all about scaring people into reaching their maximum potential, I question the humanity of such “rigor”. But it takes all kinds. I am just very saddened by how some view “education”.
Yet another study showing this.
I find this hilarious. The education deformers fall all over themselves saying that our schools are failing and that the failure is due to our lousy teachers. But, if you correct for the socio-economic status of kids taking the international exams, our students are at the very top on the deformers’ own preferred measures–international standardized tests. So those failed teachers in those failed schools must be doing something right.
And study after study shows that the grades given by those autonomous, independent “failures,” our teachers, are much more highly predictive of college success than are the standardized tests designed specifically for the purpose of such prediction. The SAT is probably the most well-vetted standardized test on the planet, and its predictive value is next to nil. We’ve long known this to be so, and this study is just further confirmation of that.
So much for the whole theory on which ed deform is based.
Look, science proves its worth by its predictive value. Feynman rightly claimed for quantum electrodynamics the status of the best that science has to offer because its predictions agree with measured outcomes to a higher degree of accuracy than humans have ever achieved in any other endeavor.
Those much-maligned teachers, with their much-maligned subjective grading systems, do a MUCH BETTER job of prediction–are collectively more scientific–than are the silly tests worshipped by the deformers.
Ed deform is faith-based pseudoscience, and the ed deformers’ “data-based decision making,” based as it is on these flawed instruments, is purest numerology.
But hey, Lord Coleman, who gave us the breathtakingly amateurish CC$$ in ELA, is now busily Coring the SAT. Can he make the SAT even worse? Have a look at the Common Core College and Career Ready Assessment Program (C.C.C.C.R.A.P.) now being prepared and the experiences with that in the states where those CCCCRAP tests have been piloted. Yes, the SAT can get even worse. Mark my word–that the Cored SAT will be even more disastrous is INEVITABLE.
I really can’t wait for the SAT train wreck. A lot of angry white suburban tiger moms will be left reeling.
Ecologies are better than monocultures; you don’t get innovation, or the achievement of varying individual potentials, via standardization.
After the devastation that the CC$$ will bring on our land, that will become the “common” wisdom, but a great deal of damage will be done in the interim.
I am curious whether anyone else thinks it is significant that the study cited here did not look at students who scored in the top 50% of their class on standardized exams. Here is the central quote as far as public universities are concerned:
Across the study, non-submitters (NOT INCLUDING THE PUBLIC UNIVERSITY STUDENTS WITH ABOVE AVERAGE TESTING, to focus on the students with below-average testing who are beneficiaries of an optional testing policy) earned Cumulative GPAs that were only .05 lower than submitters, 2.83 versus 2.88. The difference in their graduation rates was .6%. With almost 123,000 students at 33 widely differing institutions, the differences between submitters and non-submitters are five one-hundredths of a GPA point, and six-tenths of one percent in graduation rates. By any standard, these are trivial differences. (Emphasis is mine.)
So among students who did relatively poorly on standardized exams, the exact scores did not have much predictive power.
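To put the quoted gaps on a standardized scale, here is a back-of-the-envelope sketch; the GPA standard deviation is an assumed, purely illustrative figure (the study excerpt does not report one):

```python
# Figures quoted from the study.
submitter_gpa, nonsubmitter_gpa = 2.88, 2.83
grad_rate_gap = 0.006  # six-tenths of one percent

# Assumed, illustrative spread of cumulative GPAs (NOT from the study).
assumed_gpa_sd = 0.66

# Standardized mean difference (a Cohen's-d-style figure under the assumed SD).
d = (submitter_gpa - nonsubmitter_gpa) / assumed_gpa_sd
print(f"standardized GPA gap: {d:.3f}")
```

Under that assumption the gap is under a tenth of a standard deviation, which by the usual conventions is indeed a trivial effect size.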
I will also try to find out if the study used weighted or unweighted GPA. My state typically does not weight high school GPA and unweights any weighted GPA sent in by applicants.
I think I found the answer. Students who report a high school GPA above 4.0 are coded as 4.0. Given that the study covers students entering college between 2003 and 2010, I would think that the increased use of weighted GPA means that some of the 2003 entrants not coded as 4.0 would have been coded as 4.0 had they entered college in 2010. In addition, students with the same grades in the same classes might well have different reported GPAs, depending on which university they attend. My institution, for example, would report an unweighted GPA that might well be below 4.0, while the same student, had they chosen a different college or university, would have been coded as a 4.0 if that institution used weighted GPA.
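That coding rule is simple to state in code, and a sketch (with invented GPA figures) shows why it blurs the weighted/unweighted distinction: two students with identical transcripts can be coded differently depending on their institution’s reporting convention.

```python
def coded_gpa(reported_gpa):
    # The study's stated coding rule: GPAs above 4.0 are capped at 4.0.
    return min(reported_gpa, 4.0)

# Same hypothetical transcript, two reporting conventions:
unweighted = 3.7  # honors/AP grades reported on a plain 4.0 scale
weighted = 4.3    # the same grades with an honors bump on some classes

print(coded_gpa(unweighted), coded_gpa(weighted))
```

The cap turns the weighted 4.3 into a 4.0 while leaving the unweighted 3.7 alone, so identical coursework lands at two different coded values.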
Just because I’m a curmudgeon, I’ll also note what’s probably the most obvious point when you actually look at the study: it examined only schools with “optional testing policies,” i.e., schools that make “admission decisions without standardized testing being used as an admissions credential for all students.” Twenty-seven of the 33 schools don’t require entry tests at all. The other six require entry tests to be submitted but guarantee admission for students in the “top X%” of their high school class.
More curmudgeonly questions:
* When can I expect to read the 7-part analysis of fatal flaws in the study’s sampling methodology?
* Is there some shadowy force that has funded this study that I should be aware of?
* Were the students at these schools informed that their data would be disclosed to third parties for mining and analysis, or given a choice to opt out? (The schools disclosed records for 122,916 students, with data fields that included, at a minimum, students’ high school and college academic performance, race, gender, immigrant status, address, information that alumni provided to their schools about their post-graduation lives, and, for students at 8 of the schools, whether the student was classified as “learning disabled.”)
And while I’m at it:
* What security protocols applied to the transfer and storage of the data?
* What data fields were included in the disclosures?
* Was all personally identifiable student information such as name, address, and SS# de-coupled from the rest of the data?
* Did schools disclose the data of students who are minors?
* Did the authors destroy the data after the completion of the study, and if not, what limitations exist on its use?
* Did the authors enter into confidentiality agreements with the schools?
* If so, did those agreements permit the authors to disclose the data to third parties?
* If third parties were permitted to receive the data, what security protocols and confidentiality obligations were they bound by?
* Were third-party recipients required to destroy the data after they no longer needed it for purposes of this study?
* Have the authors agreed to be liable for any unintended disclosures or security breach?
Etc.
Our daughter has never been a great bubble test taker and gets extremely nervous when taking one of these idiotic tests that may decide her fate.
No matter what I told her, she would not calm down before a test, and this nervous state played into the results.
For that reason, I suggested she focus on her grades and on a sport, with the goal of graduating from high school as a scholar athlete. Her SAT scores were slightly below average, nowhere near the average score that, according to media reports, most applicants to Stanford earn.
But she was accepted to Stanford based on her grades, social qualities, demonstrated leadership skills, and the fact that she did graduate from high school as a scholar athlete.
The results of these bubble tests are as meaningless as IQ scores that also come mostly from bubble tests.
In fact, I’ve never done well on bubble tests either, and my wife arrived in the US with a subpar education from China that didn’t go beyond middle school, thanks to the tumultuous years of Mao’s Cultural Revolution, when little teaching or learning was taking place in China.
For my wife, motivation drove her to success, not an SAT score. When my wife arrived in the United States on a student visa, she didn’t speak or understand English. Faced with the risk of deportation if she didn’t learn English, six months later she had learned enough of the language to understand people. The rest is history.
If anyone wants to measure an individual’s odds of future success in life, observe their motivation and what they do to achieve their goals. Don’t pay attention to anything else.
well said, Lloyd!
Now, if only ALL colleges and universities would get on board and agree to ditch (or de-emphasize) the SATs. Not only would college-bound students be able to focus on activities aside from SAT prep, but the entire SAT/College Board machine would disintegrate and collapse. Or is that too much to ask?
It may depend on how many people the SAT/College Board machine employs and the amount of money involved.
If they are “too big to fail” then we are probably stuck with them without a major social movement similar to the Vietnam anti-war movement in the early 1970s that led to the end of that ideologically driven corporate war that killed millions while profiting the few.
Another way to end the use of the SAT as a measurement tool is if the majority of universities stopped using it as part of the admissions process. Mass letters of protest to all universities might help achieve this.
The same could be said, Lloyd, about the ACT. There are states, including KY, that administer it to all juniors and use it in the accountability system. ACT also has the QualityCore (end-of-course) tests as well as the up-and-coming ASPIRE CCSS tests.
Tests mean nothing and letter grades are so flexible they are blatant lies. Time for a change. http://www.wholechildreform.com
This is great. Stop supporting David Coleman who now runs the College Board. Nobody gives a #@%##! what number his tests assign to a young person. They are apparently meaningless in the big scheme of things and as a predictor of college success.
While it’s great that yet another study shows what we have known for several decades (that high school GPAs are equal to or better than standardized tests at predicting college success), shooting the dead “standtest” horse will do nothing to change the reform movement’s agenda or its emphasis on standardized tests of any kind. The challenge is not so much scientific as political and economic. Powerful economic incentives push the testing industry forward and distort the priorities of data-obsessed politicians and businessmen, who know little about data measurement in education: the businessmen come from the greatly simplified world of business, where the relevant outcome metrics are “reducible” to profits, and the politicians depend on the financial contributions of those very businessmen. It’s got to be said: just follow the money and the political power that comes with it. One doesn’t have to question the noblesse oblige of the reformers, or their love of children, to see that the economic and political drivers simply distort things in ways that are damaging.
At my institution ACT scores do predict college success, perhaps because the range of ACT scores is much larger than the range of high school GPAs. Our ACT scores range from 16 to 36 (for SAT folks, that is roughly from 395/395 to 800/800), while cumulative GPA ranges only between 3 and 4.
That may be at your school, but overall the ACT is only marginally better than the SAT at predicting college “success” (grades and completion). And as one college enrollment specialist said about the SAT’s predictive power, “I may as well use shoe size.”
Institutions like mine are the best place to look at this question because there is no selection bias. If a selective college or university admits a student despite a low ACT or SAT score, it is because it has some evidence that the score is a poor predictor of how well that student will do in college. If admissions has done its job, that is exactly what the analyst will find: students admitted with low standardized test scores did just as well as students admitted with high scores.
You need to look at institutions like mine that accept the majority of students based on their high school GPA (at least a 2.0 average over a set of academic classes). There is no test-score selection bias, and we admit many students who are unlikely to do well at the university.
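The selection-bias point can be illustrated with a quick simulation of “range restriction”: even when test scores genuinely correlate with college GPA across the whole applicant pool, the correlation measured among admitted students alone comes out much weaker. A sketch, with all numbers invented for illustration:

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
# Test score and college GPA both reflect a shared latent ability,
# giving a score-GPA correlation of about 0.5 in the full pool.
scores, gpas = [], []
for _ in range(20000):
    ability = random.gauss(0, 1)
    scores.append(ability + random.gauss(0, 1))
    gpas.append(ability + random.gauss(0, 1))

full_r = pearson(scores, gpas)

# A selective school admits only high scorers: restrict to the top of the range.
admitted = [(s, g) for s, g in zip(scores, gpas) if s > 1.0]
restricted_r = pearson([s for s, _ in admitted], [g for _, g in admitted])

print(f"full pool r = {full_r:.2f}, admitted-only r = {restricted_r:.2f}")
```

Nothing about the test changed between the two measurements; only the sample did, which is why open-admission institutions are the cleaner place to estimate predictive power.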
All students in the state of Michigan take the ACT before graduating. I wonder how much money could be saved by eliminating that requirement?
Just heard a great discussion on this with the author of the study. It aired on public radio station KQED (SF Bay Area) on their daily public affairs program, Forum:
http://www.kqed.org/a/forum/R201402210930
We’ve known this for years.
A big problem in education is the fact that we ignore what we know.
Reblogged this on 21st Century Theater.
I would assume that one of the things institutions find most attractive about standardized tests is that they provide more data points, and more granular data points, for use in the admissions process, especially at large institutions flooded with applications. Say you get 30,000 applications for 10,000 spots. You could pick the top 10,000 GPAs and leave it at that. You could try to review every single application and devise a grading system that takes other factors into account (5 points for debate club, 10 points for legacy students, 10 points for applicants who check the “minority” box, 5 points for a great personal essay, etc.). You could try to come up with some “value added” adjustment ratio to apply to each GPA based on the quality/difficulty of the high school. At some point, you might say, “You know what would be great: What if there were one test that all these applicants had to take, graded on the same scale?”
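A points-based screen like the one described above is easy to prototype. Here is a minimal sketch; the weights and the GPA scaling are the hypothetical figures from this comment, not any real institution’s rubric:

```python
# Hypothetical screening rubric (weights taken from the comment above,
# not from any real institution's policy).
WEIGHTS = {
    "debate_club": 5,
    "legacy": 10,
    "minority": 10,
    "great_essay": 5,
}

def screening_score(gpa, attributes):
    """GPA rescaled to a 100-point base, plus rubric bonuses."""
    points = gpa * 25  # a 4.0 GPA maps to 100 base points
    points += sum(WEIGHTS[a] for a in attributes)
    return points

applicants = [
    ("A", 3.9, ["debate_club"]),
    ("B", 3.6, ["legacy", "great_essay"]),
    ("C", 3.8, []),
]
ranked = sorted(applicants, key=lambda a: screening_score(a[1], a[2]),
                reverse=True)
print([name for name, _, _ in ranked])
```

Note how sensitive the ranking is to the arbitrary weights: under this rubric the lowest-GPA applicant comes out on top, which is exactly the kind of judgment call a single common test lets an admissions office avoid making explicitly.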
Well, that seems to be what they do. But the question is whether it is effective or fair. Does the process select the “best” students or the best test takers? Does it really have meaning once a student takes college classes? Professors and majors have varying degrees of difficulty, class to class, year to year, major to major. So, is expediency the best way to determine who gets to go to which college? Probably not.
Maybe not. I suppose what I’m saying is that, at least for the biggest institutions, it’s probably very difficult, if not impossible, to apply a system that’s both fair and efficient enough to get through the pile. There are obvious fairness issues with a system that just ranks students in descending order of GPA. Points systems can take more factors into account, but they require a lot of arbitrary decisions about which factors to count and how much to weight them. The SAT is arguably an unfair sorting tool too, I get that point. But it’s definitely not *obvious* to me that other tools, such as GPA ranking, are *more* fair. The study that’s highlighted in this post definitely does not convince me of that. If I were an institution that received 30,000, 40,000, or 50,000 applications, I would need to see much clearer evidence before I stopped using the most useful sorting tool in my toolbox.
At the most competitive places, applicants are not competing against all other applicants, but are generally competing against a subset of other applicants depending on the needs and desires of the institution. Athletes, for example, compete against other athletes, not the general pool of applicants. This was made very clear when the UC schools were prohibited from considering race in the application process.
Don’t know if I would categorize the study as “groundbreaking.” What would be a real shake-up would be a study acknowledging that high school GPA is less a measure of academic proficiency than a measure of a child’s ability to push through chores. Since that’s the game at most colleges these days, the results of the study make perfect sense.
It is so good to see that there is finally some sort of shift in the way that some students can get into college. I can speak from experience: not all students are the best test takers. I happen to be one of those students. Even though I was a straight-A student in honors and AP classes, as well as an athlete playing 3 varsity sports, I did not score as high as I would have liked on the ACT or SAT. I feel that looking at high school academics and extracurricular activities gives a better idea of how well-rounded a student is than looking at the SAT. Looking at high school grades shows the effort that the student put into studying and keeping up with assignments over the course of four years. It is a better representation of how the student will succeed in college while juggling classes and a job. Looking at high school transcripts shows the concepts in which the student had strengths and weaknesses, and can also help guide the student toward certain career paths.
In this study, at least, honors and AP classes count no more than gym. The 4.0 scale treats every class as equivalent.
My middle son took seven classes at our local university that did not count in his high school GPA at all. High school GPA is one measure of academic ability, but there are other ways as well.
ACT is being gamed like never before. Test-prep programs are making the ACT useless as a gauge of potential college success.
deutsch29: I seem to remember an article in the Atlantic (in the 1970s) saying that the SAT was first developed as part of a “gentleman’s agreement” to keep Jewish scholars out of the colleges and preserve the “WASP” college demographics. Does anyone on this list know of that? I read it in the 70s and shared it with a couple of male colleagues; I didn’t get any response from them at the time. I will have to dust off my periodical/literature search skills (they have grown rusty, along with my memory). Thanks for everything you and Diane do.
The SAT was developed by Carl C. Brigham, one of the pioneers of the IQ testing movement, notorious for a book he wrote linking IQ to race and ethnicity. He was a senior scientist at the College Board and worked on the development of the SAT in the 1930s, when it was tried on an experimental basis. It was adopted on December 7, 1941, Pearl Harbor Day, when the college presidents realized we were going to war and there would not be time for young men to take the six-hour College Boards. It was not intended to keep out Jews, nor did it have that effect. The colleges kept out Jews through quota systems that limited the number of Jews.
IQ tests are nothing but a measurement that actually means little to nothing and, in the end, feeds a curiosity factor that may give so-called bragging rights to individuals who score high.
But IQ by itself does not predict success in anything or reveal the character of an individual. All an IQ test does is allegedly reveal the potential for cognitive ability compared to others and even then it’s just another theory like evolution or creationism.
For instance, if Arnold Schwarzenegger and/or Jack LaLanne, both weak, sickly wimps in their youth, had never set goals to improve their health and strength by lifting weights and eating properly, they would’ve stayed weak, sickly wimps and probably would’ve died long ago.
Someone with a high IQ could be a high school drop out who is also homeless while another person with an average or lower IQ graduates from college and ends up owning a highly successful business.
Imagine the average person from the previous paragraph sitting on a bus bench next to the homeless guy with the high IQ, who then brags that he has this high IQ and is superior to others.
What does that mean?
And because of racist attitudes, urban grades hold less meaning than suburban grades. Also, grades mean little. Isn’t it time we give colleges something meaningful? http://savingstudents-caplee.blogspot.com/2013/12/accountability-with-honor-and-yes-we.html
Any thoughts about states like mine that use unweighted GPA?
I grew up poor and disadvantaged. An SAT tutor like my friends had? Right. Neither one of my parents graduated from college–they had no clue.
My SAT scores were not impressive, but my accomplishments have been. I also had high grades in high school.
Any bubble test is a 1-day snapshot. Grades are what a student overcomes day after day.
I know my middle son’s GPA would have been higher if he had brought in those Kleenex boxes in sophomore English or had not taken seven courses outside the high school that did not count. Hard to say that his high school GPA was anything close to an accurate reflection of his academic performance. Standardized tests like the SAT, SAT II, and AP exams were better measures than his high school GPA.
I thought it was common knowledge that, for young women, the GPA is a much better predictor of college achievement than any test….
It seems to be true that young women get higher grades than standardized test scores would predict and young men get lower grades than standardized test scores would predict.
On average, girls are almost always better students. That may be due to the fact that most girls don’t have the same disruptive levels of testosterone running wild through their blood, messing up the thought process and the ability to concentrate on anything but the cutest girl in sight.
But by age 25, most guys start to gain control over the disruptive factor of testosterone and start to catch up with the girls academically.
I’ve read a few of the studies on this topic. Most boys, starting at adolescence, have a serious handicap when it comes to academics, and it’s brought on by that flood of testosterone.
The difference in boys’ performance on standardized exams and class grades begins earlier than that. See this study of grades and standardized scores in primary school: http://www.terry.uga.edu/~cornwl/research/cmvp.genderdiffs.pdf
More proof that for most boys, it’s really a challenge being a boy. And even a bigger challenge for teachers struggling to teach those boys who mostly aren’t interested in what they are being taught. Their interest is directed elsewhere.
Add attention deficit disorder and/or dyslexia to that boy, for instance, and the challenge for the teacher magnifies.
My second year as a teacher (1976-77), I taught a 5th grade class in which half of the boys had ADD and/or dyslexia. The previous teacher had had a heart attack and died. I was the 13th substitute in 13 days and the only one who stayed; the other twelve refused to return. The school offered me a long-term position through the end of that school year.
One boy named James (real name but without the last name provided) also had anger issues in addition to his ADD and severe dyslexia.
I will never forget James.
I had to move him into a corner behind my desk and use a book shelf to block his view of the other students because all it took was a look and he was off like a rocket to physically attack that other student he thought might have been glaring at him.
Once, after I had spent half the period settling the class down so they could work on a math assignment, I was at my desk when it got unusually quiet in that corner behind me. Worried, I glanced around to see James upside down on his desk, the top of his head in contact with the desktop and his feet pointed at the ceiling, spinning like a top.
Sigh! And the plutocrats who are jerking Obama’s puppet strings would accuse me of being a failure as a teacher because James learned very little in that classroom.
The results here suggest that when boys are taught, they do at least as well as girls on standardized math and science exams. Boys’ grades, however, seem to be discounted because of comportment and do not reflect their academic ability. Perhaps that has something to do with the gender imbalance at places like UNC Chapel Hill (58% female, 42% male).
TE, that gender imbalance is nationwide. Boys are falling WAY behind on just about every measure–percentage graduating from high school, entering college, completing college, completing an M.A. If the situation were reversed, if the numbers for girls were this bad, I think that we would be having a national conversation about why our schools are failing girls. So, I’m glad you brought this up.
Part 1
“Stupid is as stupid does.”
Forrest Gump
The SAT is neither an achievement test nor a (good) predictive test. In fact, the acronym SAT stands for nothing at all. Those who promote the ACT and the SAT as good measures of “college readiness” should be given a good swift kick in the seat of the pants. We are told by the manufacturers of these tests that they are very good predictors of college success. Indeed, colleges say that’s why they use them. But it simply isn’t true.
The authors of a study in Ohio found the ACT has minimal predictive power. For example, the ACT composite score predicts about 5 percent of the variance in freshman-year Grade Point Average at Akron University, 10 percent at Bowling Green, 13 percent at Cincinnati, 8 percent at Kent State, 12 percent at Miami of Ohio, 9 percent at Ohio University, 15 percent at Ohio State, 13 percent at Toledo, and 17 percent for all others. Hardly anything to get all excited about.
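For readers who want those “percent of variance” figures on the more familiar correlation scale: variance explained is the square of the correlation coefficient, so the implied correlation is its square root. A quick sketch using the Ohio numbers quoted above:

```python
# Variance-explained (R^2) figures quoted above: ACT composite vs.
# freshman-year GPA at Ohio public universities.
r_squared = {
    "Akron": 0.05, "Bowling Green": 0.10, "Cincinnati": 0.13,
    "Kent State": 0.08, "Miami of Ohio": 0.12, "Ohio University": 0.09,
    "Ohio State": 0.15, "Toledo": 0.13,
}

# The implied correlation is the square root of variance explained.
correlations = {school: rsq ** 0.5 for school, rsq in r_squared.items()}
for school, r in sorted(correlations.items(), key=lambda kv: kv[1]):
    print(f"{school}: r = {r:.2f}")
```

Even the strongest case (Ohio State) works out to a correlation of about 0.39, and the weakest (Akron) to about 0.22.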
Here is what the authors say about the ACT in their concluding remarks:
“…why, in the competitive college admissions market, admission officers have not already discovered the shortcomings of the ACT composite score and reduced the weight they put on the Reading and Science components. The answer is not clear. Personal conversations suggest that most admission officers are simply unaware of the difference in predictive validity across the tests. They have trusted ACT Inc. to design a valid exam and never took the time (or had the resources) to analyze the predictive power of its various components. An alternative explanation is that schools have a strong incentive – perhaps due to highly publicized external rankings such as those compiled by U.S. News & World Report, which incorporate students’ entrance exam scores – to admit students with a high ACT composite score, even if this score turns out to be unhelpful.”
The SAT is no better. College enrollment specialists say that their research finds the SAT predicts between 3 and 15 percent of freshman-year college grades, and beyond that, nothing. Shoe size would work as well, or better.
The SAT is used by colleges for their own nefarious reasons, like boosting their rankings in “best” colleges lists and leveraging financial aid (increasingly and especially for students who don’t need it).
Here’s Princeton Review founder John Katzman on the SAT (and Princeton Review does quite a bit of test prep for the SAT):
“The SAT is a scam. It has been around for 50 years. It has never measured anything. And it continues to measure nothing. And the whole game is that everybody who does well on it, is so delighted by their good fortune that they don’t want to attack it. And they are the people in charge. Because of course, the way you get to be in charge is by having high test scores. So it’s this terrific kind of rolling scam that every so often, somebody sort of looks and says–well, you know, does it measure intelligence? No. Does it predict college grades? No. Does it tell you how much you learned in high school? No. Does it predict life happiness or life success in any measure? No. It’s measuring nothing. It is a test of very basic math and very basic reading skill. Nothing that a high school kid should be taking.”
http://www.pbs.org/wgbh/pages/frontline/shows/sats/interviews/katzman.html
Part 2
Here’s The Big Test author Nicholas Lemann on the SAT:
“The test has been, you know, fetishized. This whole culture and frenzy and mythology has been built around SATs. Tests, in general, SATs, in particular, and everybody seems to believe that it’s a measure of how smart you are or your innate worth or something. I mean, the level of obsession over these tests is way out of proportion to what they actually measure. And ETS, the maker of the test, they don’t actively encourage the obsession, but they don’t actively discourage it either. Because they do sort of profit from it…every time somebody takes an SAT, it’s money to the ETS and the College Board. But there is something definitely weird about the psychological importance these tests have in America versus what they actually measure. And indeed, what difference do they make? Because, there’s two thousand colleges in the United States, and 1,950 of them are pretty much unselective. So, the SAT is a ticket to a few places.”
http://www.pbs.org/wgbh/pages/frontline/shows/sats/interviews/lemann.html
The ACT and the SAT are used for the purpose of “financial-aid leveraging.” Instead of using a $20,000 scholarship for one needy student, schools can break that amount into four $5,000 grants for wealthier students who score higher, who will pay the rest of the tuition ($15,000 a year) and who will bring the school more cash and “will improve the school’s profile and thus its desirability.”
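The leveraging arithmetic in that paragraph works out as follows (a simplified one-year sketch using the figures above):

```python
tuition = 20_000

# Option 1: one full $20,000 scholarship to a needy student.
revenue_needy = 1 * (tuition - 20_000)

# Option 2: the same $20,000 split into four $5,000 grants for
# wealthier, higher-scoring students, each paying the remaining $15,000.
revenue_leveraged = 4 * (tuition - 5_000)

print(revenue_needy, revenue_leveraged)
```

The same scholarship budget yields $0 in net tuition one way and $60,000 the other, which is the whole incentive behind using test scores to target the grants.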
As Matthew Quirk wrote, “The ACT and the College Board don’t just sell hundreds of thousands of student profiles to schools; they also offer software and consulting services that can be used to set crude wealth and test-score cutoffs, to target or eliminate students before they apply…That students are rejected on the basis of income is one of the most closely held secrets in admissions; enrollment managers say the practice is far more prevalent than most schools let on.”
See: http://www.theatlantic.com/magazine/archive/2005/11/the-best-class-money-can-buy/4307/
This is how the ACT and SAT are now used. It truly is sad. And it’s wrong.
It’s way past time for educators to stop perpetuating the ACT/SAT myth.
Reblogged this on Docjonz's Accounting Blog and commented:
Given the proliferation of “test preparation” services, the unfortunate but ever-increasing reliance on standardized testing to categorize pre-college children, & all of the academic research pointing out the absurdity of SAT-style exams, it’s great to read that, in their admissions decisions, colleges & universities are giving more weight to a student’s academic performance than to their standardized test performance.
How about a Google hiring requirement?
http://mobile.nytimes.com/2014/02/23/opinion/sunday/friedman-how-to-get-a-job-at-google.html?_r=0&referrer=