A new blogger enters the education fray with timely questions about the validity, reliability, and fairness of the Smarter Balanced Assessment, the Common Core test paid for by the U.S. Department of Education.
Dr. Roxana Marachi, Associate Professor in the Department of K-8 Teacher Education at San Jose State University, has launched a blog called http://eduresearcher.com/.
In this post, she raises important questions, such as:
Q1: How is standardization to be assumed when students are taking tests on different technological tools with vastly varying screen interfaces? Depending on the technology used (desktops, laptops, Chromebooks, and/or iPads), students would need different skills in typing, touch-screen navigation, and familiarity with the tool.
Q2: How are standardization and fairness to be assumed when students are responding to different sets of questions based on how they answer (or guess) on the adaptive sections of the assessments?
Q3: How is fairness to be assumed when large proportions of students do not have access at home to the technology tools that they are being tested on in schools? Furthermore, how can fairness be assumed when some school districts do not have the same technology resources as others for test administration?
Q4: How/why would assessments that had already been flagged with so many serious design flaws and user interface problems continue to be administered to millions of children without changes/improvements to the interface? (See report below)
Q5: How can test security be assumed when tests are being administered across a span of over two months and when login features allow for some students to view a problem, log off, go home (potentially research and develop an answer) and then come back and log in and take the same section? (This process was reported from a test proctor who observed the login, viewing and re-login process.)
Q6: Given the serious issues in accessibility and the fact that the assessments have yet to be independently validated, how/why would the SmarterBalanced Assessment Consortium solicit agreements from nearly 200 colleges and universities to use 2015 11th Grade SBAC data to determine student access to the regular curriculum or to “remedial” courses? http://blogs.edweek.org/edweek/curriculum/2015/04/sbac.html.
She includes a startling graph produced by SBAC, with projected failure rates on the 11th-grade math tests for different subgroups:
67% of all students are expected to fail
83% of African-American students are expected to fail
80% of Latino students are expected to fail
93% of English language learners are expected to fail
She adds:
“Evidence of Testing Barriers and Implementation Problems
The Board is encouraged to consider the following evidence documenting serious concerns regarding the validity, reliability, security, accessibility, and fairness of the SmarterBalanced Assessments.
SmarterBalanced Mathematics Tests Are Fatally Flawed and Should Not Be Used documents serious user-interface barriers and design flaws in the SmarterBalanced Mathematics assessments. According to the analyses, the tests:
“Violate the standards they are supposed to assess;
Cannot be adequately answered by students with the technology they are required to use;
Use confusing and hard-to-use interfaces; or
Are to be graded in such a way that incorrect answers are identified as correct and correct answers as incorrect.”
“The author notes that numerous design flaws and interface barriers had been brought to the attention of the SmarterBalanced Assessment Consortium during the Spring 2014 pilot test, and remained unresolved during the Spring 2015 test administration.”
The post includes comments by teachers and administrators about the problems with SBAC.
She closes her blog with this reflection on the predicted failure rates:
“My letter to the Board is to encourage responsible, ethical, and legal communications about the assessment data that will apparently soon be disseminated to the public. Students’ beliefs about themselves as learners will be caught up in the tangle of any explanations surrounding the assessments, and as we know, decades of research demonstrate the power of student belief to be a factor impacting subsequent effort and persistence in learning.”
It is incredible that the truth stated in the last paragraph is not universally understood. As teachers we live with it every day in our classrooms.
So important, yet ignored by the ignorant. Chronic, institutional failure will damage many students. The majority of fifth graders in NY will be told that, for the third year in a row, they are failures in math and ELA. Vulnerable students at a very vulnerable age. The harm will be irreparable. So not only invalid, but inflicting permanent harm on many kids.
All of the most literate, sharp adults I know who have taken the SBAC ELA tests are baffled by many of the questions. One said to me, “There WAS no valid answer offered.” If the ELA test is a confusing morass for a highly educated adult, how in the world is it appropriate to give it to a child? The “expert” psychometricians who built this thing say “trust us.” I do not trust them. I think a massive hoax is being perpetrated on the country. Do not defend these tests unless you’ve taken the online practice tests and can vouch for their quality!
These Common Core tests are academic death traps, designed to trick, frustrate, confuse, tire out, and wear down all but the most advantaged and disciplined students. They no more measure reading and writing abilities (or instructional quality) than a broken thermometer can measure the beauty of a perfect summer day.
Why do we continue to attack reading from this bass-ackwards direction? The vast majority of students can decode. Struggling readers have comprehension problems because of a lack of background knowledge. Ignoring dyslexia is a whole other issue.
“The “expert” psychometricians who built this thing say “trust us”.”
Those expert psychometricians are just as expert as expert phrenologists, expert eugenicists, expert astrologers, expert psychoanalysts were. Exceedingly excellent experts in their fields. . . so much so that only they know just how exceedingly excellent they are!
Hello! I am a parent advocate who lives in FL. We are using AIR (SBAC) for our tests and here is the email I sent to the FL DOE with my questions and concerns. I sent the letter/email on 7/6 and have yet to hear back. I printed out and went through the 2,245 page AIR contract.
Dear Commissioner Stewart,
I am an extremely concerned parent of an incoming 5th grader. I am concerned about her future. I started researching the Common Core State Standards and AIR over a year ago. I printed out the executed contract (14-652) that you signed with AIR and went through the entire contract. I am not the only parent that did this. Here are some of my questions and concerns (as well as thousands of other parents across the state of Florida) and I am hoping that you can answer my questions.
1. Florida adopted Common Core in 2010. In the ITN the FL DOE stated “Florida is seeking a state assessment that will measure the full breadth and depth of the Common Core State Standards”. There are parents and teachers in this state that are under the impression Florida did not adopt the Common Core State Standards – we did. It is clearly spelled out in the contract.
2. You signed the $220M contract (it’s actually over $320M according to the executed contract) with AIR without parental or public input. There were provisions in the contract for “public input” and feedback but we have yet to find anyone that was “invited” to participate in such a meeting. Nor did the taxpayers get to vote on how our taxpayer dollars would be spent! When exactly were these meetings scheduled and who participated in them?
3. In the executed contract with the FL DOE and AIR – AIR was to “provide empirical evidence of psychometric validity” to the DOE prior to the test being given. That never happened. You gave our kids a test that has not been validated and not just here in FL – Utah’s test has not been validated either. AIR used Utah’s test (SAGE) questions on our test and now we are faced with having the “validity study”. This validity study should not even be occurring had AIR provided you with the validity documents as stated in the executed contract (14-652).
4. In March 4, 2015, testimony to the FL Senate, you claimed the test was “field-tested” in Utah and told the Senators the test was “absolutely reviewed and is psychometrically valid”. You then told the Senators you would send them the validity documents. What you sent them was a 3-page document outlining the validity process, NOT the actual validity documents that AIR was responsible for delivering per the executed contract.
5. AIR implemented the test in other states that had the same issues that Florida had (students could not log in, blank screens, students got logged out, etc.). Our FSA (Common Core) test was an epic failure of monumental proportions. Here are the documented testing failures from the Tampa Bay Times: 3/2/15 – 1st big failure and day 1 of testing, 3/3/15, 4/13/15, 4/21/15 – 2nd big failure, 5/13/15, 5/18/15. The Algebra I, Algebra II and Geometry EOC’s were part of the testing responsibility from AIR. When the EOC’s were deployed, teachers and students were shocked and dismayed to see that half, if not three-quarters, of the material on those tests had not been taught – because the students were not supposed to be taught that material. Some teachers stated there were Trigonometry questions on the test. The DOE told a parent the EOC’s were also “field-tested” in Utah. Utah told us they don’t even use those tests.
6. NO TRANSPARENCY. No parent in this state can view the test their child took whether it be the FCAT or FSA. I find that to be a huge problem. Many parents (myself included) made requests to view the test our kids took and we were told NO. The DOE claims (from the contract) the tests are “to provide student academic achievement and learning gains data to students, PARENTS, …..” that is not true. Nor is this statement from the contract “…by the public to assess the cost benefit of the expenditure of taxpayer dollars.” If parents cannot view the tests and the public cannot view the tests – WHY is that language even in the FL statutes, laws and contracts? There is something seriously wrong if a parent CANNOT view the test their child took. This will be revisited.
7. PRIVATE DATA (PII) was freely given to AIR. Our children’s most private and personal information was freely given to AIR by the DOE. The language in the contract allowed for this to happen. AIR’s employee data was compromised due to a breach, and now they have hundreds of thousands of (our) students’ data. AIR has already been breached this year while testing was underway. You (the DOE) are not enforcing the contract. AIR cannot even keep their own employee data safe.
8. VALIDITY STUDY. Had you held AIR to the executed contract (signed by yourself) – “The contractor must show evidence that are psychometrically defensible and operationally feasible” – we would not have to conduct this validity study by Alpine Testing Solutions (THE only company that responded), which is a small, unknown company in Utah. How on earth can they possibly validate our test (FL) when Utah’s test has NEVER been validated? I can send you a copy of the email we received from the Utah Board of Education.
9. VAM – You are using the assessment results to measure teacher “effectiveness” with a formula that was originally designed to measure crop growth, NOT learning gains. You are using the assessment results to rate our schools and, worst of all, you are using the assessment results to rate our children. How can you possibly measure student progress, teacher effectiveness and issue school grades by using ONE assessment? That logic is absurd, and there is no research-based evidence that states it is effective and/or accurate. VAM is an insult to the teaching profession. You are holding our students, teachers and schools accountable. Who is holding YOU accountable? Who is holding AIR accountable? What is being done? We are fed up with the massive misuse and abuse of taxpayer dollars.
Thank you, Ms. Stewart – I sure hope to hear back from you or from someone at the DOE who can answer my questions. I sure hope you understand what is happening and has happened in this state this past year. I am a parent advocate and am very active in the Opt Out movement. Our opt out numbers have grown exponentially since the testing failures occurred. What happened to our kids, students and teachers in this state was nothing less than a disgrace. I have a lot more concerns and would be happy to share those with you. You can rest assured we will have higher opt out numbers this school year than we did last year. Ms. Stewart, we (parents, teachers and students) have been ignored and pushed aside all school year. Our teachers, parents and kids never got closure to the 2014-2015 school year, and we are now faced with a new school year looming. Did you hear the stories about what happened to our kids during testing? Did you read the stories about what happened to our kids during testing? Our pleas for help went ignored by those who put this in place. Our kids had the worst possible school year imaginable, and that is just wrong. It’s worse than wrong – it’s shameful.
Thank you for your time.
Respectfully yours,
Deb Herbage
Parent of 5th Grader
Pasco County
Deb, is there any way you can email me that AIR contract? I’ve been trying to get these from the Delaware Department of Education, but they charged so much for their contracts with them, I had to file a complaint with our State Department of Justice. They ruled our DOE overcharged 5 times the normal amount for my FOIA request, but it is still a lot of money (which I have a GoFundMe account going to raise the money). My email is kevino3670@yahoo.com
Thank you either way!
AIR is NOT SBAC. Florida bought their AIR testing this year from Utah (I’m in Utah). Not that the tests are any better (they’re not), but they’re not the same thing.
Using the SBAC scores in 11th grade to determine college placement is simply a ploy to get all students to take the test; parents are threatened into not opting out.
Since the rich private schools are not held to the same rigid standards of testing and the high failure rates have already been predicted, it frightens me to think that the evil ones are trying very hard to take away a high school diploma from the shrinking middle class and poor. What else could it be? Local boards and parents need to stand up and fight before it is too late. Only the rich children will be educated if this sci-fi fiasco is allowed to continue.
If I may correct your first statement a bit, ST:
“Since the rich private schools are not held to the same BS PSEUDO-standards. . . ”
Those aren’t “standards” they are demands, mandates, commands, coercions, intimidations, pressures and/or strong-armings, in other words extortions, black-mailings and fraudulent schemes.
Duane Swacker: perhaps it was just a momentary lapse, but you forgot to include in your list a couple of other descriptors—
Sucker punches. Mandated failures.
Always glad to lend a hand…
😎
Fantastic job, Dr. Marachi! I, too, was moved by your last paragraph. That’s as strong an argument as any against the poorly-designed SBAC (and its cousin PARCC) and against the artificially-low passing rate. The last thing we want is more students concluding, “I am not good at math.” The self-impression that forms in year one or two of testing could make instruction that much harder in subsequent years, with the unintended consequence that other measures (VAM anyone?) also stay in the gutter. What an ugly downward spiral we’ve gotten ourselves into.
How can standardization be achieved when the cutoff dates for grades vary from state to state? My child is a month older than their cousin in another state; however, they are a grade behind their cousin. What is the purpose of common core standards when, if we moved to another state, my kids would be behind a year? Our state cutoff date is August 31, while others go by calendar year.
The notion that nearly every student in every state would be on roughly the same page during any given time frame was born of complete ignorance or sheer deception. Any teacher could tell you that this was never going to happen, for reasons just as you describe. If the reformers really had this as a goal, they would have developed a detailed, standardized “scope and sequence” that covered all subject areas – and they would have standardized enrollment ages as well. The devil is always in the details, and these simple ideas were conveniently overlooked. Now realistically, this too is just a pipe dream, but if they wanted to pretend, they could have at least used a more believable model. Never forget that your efforts as a parent can always override any school-related issues.
From what I heard, the SBA will not be adaptive (able to respond to answers by giving a harder or easier question based on an incorrect or correct answer) for a couple of years. It takes a certain period of time to build up a bank of questions, is the way it was explained to me. That said, this test is so badly written, vague, open to different interpretations, developmentally inappropriate, etc., that even after several years of being taught Common Core standards – remember, the test and standards come together – I don’t see how most students will ever pass it. Pretty hard to accelerate cognitive thinking, although I am sure that someone out there in profit land will put some product together and claim to do just that to increase scores, at the same time making a bundle of money again using student labor. So wrong.
Don’t worry. I am currently working on a “Brain Development Accelerator” to fix this problem. The knuckleheads who are creating these tests didn’t realize that brain development cannot be legislated.
I could add many, many more questions, but all the questions can be summed up basically with, “how can standardization and fairness be assumed when children are not standardized and there’s nothing fair about a test that measures [sic] family socio-economic status?”
This gross testing, which is enshrined in the profiteers’ pet law, ECAA, must be publicly flogged so heavily that our stinky politicians flee like the rats they are from a sinking ship.
Another excellent contribution. It is interesting that this appears in a blog. Thanks for posting this where many more can read it.
Add to the needed disclosures for both tests the actual costs of the tests: totals and overhead charges, salaries of senior administrators, plus names of all subcontractors, their awards, names of funders other than USDE and the purposes for the use of those funds, and names of the persons in charge of contracting and quality assurance. In other words, a comprehensive audit available for inspection. That is a starter project; then it would be good to have some close scrutiny of the costs of administering these tests: time lost for instruction, staff time for administration, extra expenses for the technology, and so on.
I also hope Dr. Marachi will send this summary to a lot of people who believe the myth that these tests are so great that the scores should be used for determining college placement/admission.
It’s not that we haven’t known for many years about the many problems inherent in the educational malpractices that are education standards and standardized testing. Almost as soon as standardized testing came into being in the first half of the 20th century, it was being shown that there were many problems. Banesh Hoffmann wrote about them in the 1960s. Noel Wilson’s 1997 dissertation completely destroys these malpractices. Nothing new, folks, although it is good to get as much information out as possible about the deleterious effects on students of these COMPLETELY INVALID educational malpractices.
Read and understand Wilson’s never refuted nor rebutted treatise. What follows is a brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine. (updated 6/24/13 per Wilson email)
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think college professor who “knows” the students capabilities and grades them accordingly), the General Frame-think standardized testing that claims to have a “scientific” basis, the Specific Frame-think of learning by objective like computer based learning, getting a correct answer before moving on to the next screen, and the Responsive Frame-think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback. Each category has its own sources of error and more error in the process is caused when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, i.e., errorless, or supposedly at least with minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations and the test and results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error”, any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory”. In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “’something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic)”. Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of the students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an “A” student” is almost as harmful as “I’m an ‘F’ student” in hindering students becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
Dr. Marachi wonders in Q3 how the test can be fair given the tech gap between rich and poor homes. By the same logic, and even more importantly, the tests are unfair because of the vast knowledge gap between rich and poor homes. The Hart-Risley study shows that professionals’ kids hear 35 MILLION more words at home by the age of FOUR than welfare parents’ kids. Not just more words, but richer vocabulary. This vocab and knowledge deficiency cannot be made up by schools – in fact, the gap tends to get wider as kids proceed through school – therefore all ELA tests’ (not just SBAC’s) results are not valid measures of kids’ inherent brainpower or schools’ efficacy, since they’re so heavily informed by this inherited intellectual capital deficit. Standardized math tests have the potential to be more valid measures of a child’s school learning, since most math education, unlike language education, occurs in the classroom.
Ponderosa,
I thought that the 35M was more like 3.5M.
Do you have a reference?
TIA,
Duane
http://literacy.rice.edu/thirty-million-word-gap
I guess it’s 30 million, not 35 million.
Thanks, Ponderosa for the link!
The implications of the study are mind-boggling. Seems to me that would show we would need to dedicate disproportionately larger resources (don’t get me wrong, there is no reason whatsoever not to do so) for those children with such language “deficits” (I hesitate to call them deficits). And to tell you the truth, I don’t think there is enough political, ethical and/or moral will to do such a thing. I’m not even sure how much of a “breach” into early childhood we should attempt, if any, so it would have to start later, pre-school or kindergarten. The civil liberty issues are indeed great, and I’m not sure those concerns can be swept aside.
The dilemma is certainly a mind meld.
In So. CA a district gave 11th graders SBAC test. One girl finished 3 out of 5 questions on an essay question. Teacher said wait till tomorrow–you can go back. YOU CANNOT GO BACK. Her SCORE will be graded on 3 out of 5 possible answers. She was LOCKED OUT BY COMPUTER.
SEE THE HORRORS OF SBAC AT: http://www.smarterapp.org/specifications
“SBAC is not valid, reliable or fair”
What’s left?
Oh yeah, “long, tedious, teaching- and learning-distorting, cheating-encouraging”
All admirable qualities, of course.
oh, almost forgot “profitable”, which is only an unimportant detail that really has no impact on anything.
Actually, SDP, profitable has a huge impact on these educational malpractices. It seems that profit impacts all aspects of SBAC/PARCC. (maybe I should have turned on my sarcasmometer when reading your last sentence, eh?!?)
They don’t call ’em $BAC and PORCC for nothin
Thank you for this. It’s timely as we digest the news of the SBAC opt-out rate in Washington State.
Reblogged this on Exceptional Delaware.
Extra Question: What does “SBAC” stand for?
1. Scientifically Biased Assessment Consortium
2. Seriously Broken Air Conditioner
3. Security Breach Access Code
4. Slacking Business Association Club
“Q3: How is fairness to be assumed when large proportions of students do not have access at home to the technology tools that they are being tested on in schools?”
Unrelated, but the assumption that kids all have networked computers at home seems outrageous to me. I get it, technology is here to stay, and it’s an efficient way to handle a lot of schoolwork. But why do I have to run a One Laptop Per Child program in my own household?
FLERP!:
IMO, you focus on a key point in the posting, i.e., as I see it, the critical importance of out-of-school factors, too often downplayed in the MSM and elsewhere. In this case, the unequal ability of parents to provide the same quality and quantity of tech to their children. If practice makes perfect, then unequal access to the same digital tech at home makes for more or less proficient use of digital tech at school.
Thank you for your comments.
😎
To be fair, these are legitimate questions from the researcher. Maybe the testing companies have answers to them. But there should be answers before anyone depends on these implementations.
I am a big fan of paper and pencil (old Scantron tests). Those conditions can be replicated everywhere, although they are not necessarily adaptive (there could be solutions for that). I’m assuming the cost of implementation drove the move to computer-based methods.
Do any of the other CC testing states have this problem: the lack of standardization because the kids all get different writing prompts? We have six or seven writing prompts for each grade so kids just get randomly assigned whatever prompt. Some prompts are clearly better than others. SO, HOW can kids be compared? I know that the tests are invalid and do no such thing, but if the state is insisting the kids can be compared side by side, how can they say that, even with their own flawed metrics?
I’m sure SBAC sounds fancy in its name, but it will lose its value before we know it, due to its history of technological glitches and program bugs. It’s like seeing 45-year-old flawed GE-model nuclear reactors sold to TEPCO in Japan without Congressional approval.