In copying the response of Hart Research, I inadvertently copied only part of Guy Molyneux’s comments.
Here is his full response:
TO: American Federation of Teachers
FROM: Guy Molyneux, Hart Research Associates
DATE: May 10, 2013
RE: Methodology for Common Core Survey
Following are some facts about the methodology for AFT’s recent survey of AFT K-12 teachers on Common Core implementation that may help to answer the criticisms and questions raised by Mercedes Schneider.
Schneider’s objections speak to two distinct questions: 1) does the survey reflect the views of AFT K-12 teachers?, and 2) if so, can the AFT results be extrapolated to all U.S. teachers? The answer to the first question is “yes,” for reasons explained below. The answer to the second question is “not necessarily.” When Randi Weingarten refers to what “teachers” think about the Common Core, she is referring to AFT teachers. This shorthand is not meant to deceive anyone; if it were, the press release and various poll materials would not have stated so clearly and repeatedly that the survey was conducted only among AFT members. (Indeed, even the quote highlighted by Schneider mentions “a recent poll of AFT members.”)
In fact, it is likely that a survey of all U.S. teachers would report results broadly similar to what we found among AFT members, for reasons explained below. However, it is true that we cannot be sure of this unless further research is done among non-AFT teachers. Such research would be welcome.
• The survey employed a standard sampling methodology, used in countless surveys by many polling organizations. On behalf of AFT, Hart Research Associates conducted a telephone survey of 800 AFT K-12 teachers from March 27 to 30, 2013. Respondents were selected randomly from AFT membership lists. This process of random selection produces a representative sample, allowing us to generalize from the survey respondents to the larger population being sampled (in this case, all AFT teachers). There is nothing unusual or controversial about this method.
• A sample size of 800 teachers is appropriate and common. Schneider notes that “AFT/Hart only surveyed nine one-hundredths of a percent of the AFT membership (.09%),” and adds for emphasis: “Please don’t miss this. AFT did not survey even 10% of its membership before forming an opinion of teacher acceptance of CCSS.” In fact, a survey sample size of 800 is reasonable and quite common: for example, most national media surveys interview between 800 and 1,000 registered voters. Moreover, researchers understand that survey samples are not properly evaluated as a percentage of the underlying population. By randomly selecting respondents, a relatively small sample can provide an accurate measurement on a much larger population. If Schneider’s 10% standard were correct, pollsters would need to interview 20 million U.S. voters to conduct a single survey of registered voters. Needless to say, not many surveys would be conducted.
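To check the math in this point: the margin of error for a proportion depends almost entirely on the sample size, not on what fraction of the population is sampled. A quick sketch in Python (the AFT K-12 membership figure of roughly 900,000 is only an illustrative assumption, not a number from the survey):

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """95% margin of error for a proportion, with an optional
    finite-population correction (FPC) when the population is given."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        moe *= math.sqrt((population - n) / (population - 1))  # FPC
    return moe

# n = 800 sampled from an assumed ~900,000 AFT K-12 teachers:
print(round(margin_of_error(800), 3))                      # ≈ 0.035
print(round(margin_of_error(800, population=900_000), 3))  # ≈ 0.035: FPC barely matters
```

Note that quadrupling the sample only halves the margin of error (it shrinks with the square root of n), which is why 800 to 1,000 respondents is the standard trade-off in national polling regardless of how large the underlying population is.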
• A reported margin of error of +/-3.5 percentage points does not indicate a lack of precision or poorly written questions. Schneider asks “How is it that a research firm only handling 800 surveys cannot get a more precise reading of the data than this? [a +3.5% margin of error]” and notes that “error is introduced in a lack of either question quality or precision in answering format, or both.”
The margin of error reflects the possibility that any single survey sample will not be perfectly representative of the full population. In this case, there is a 95% chance that a survey of all AFT teachers would yield results within 3.5 percentage points of those found in this survey. Schneider is correct that this means that AFT teachers’ approval of the Common Core State Standards could be as low as 71% or as high as 79% (and there is a 5% chance that the proportion is even higher or lower). The margin of error has nothing whatsoever to do with question wording.
• The survey sample is demographically similar to the population of AFT teachers. In terms of age, gender, school type, and other demographic factors, the survey respondents closely resemble the larger population of AFT teachers. This information is available to anyone upon request. Schneider guesses that 95% of respondents reside in New York State, and criticizes the failure to disclose this “fact.” In reality, 36% of survey respondents live in New York, reflecting the geographic distribution of AFT members. As it happens, approval of the CCSS is actually somewhat higher – 82% – among AFT teachers outside of New York.
• A demographic breakdown of the survey sample, and precise question wording for all questions, is available upon request. Schneider claims that “Weingarten presents the results of her survey in suspiciously general terms” and faults her failure to provide comprehensive demographic information “at the outset of the study.” These survey results were presented not in a refereed academic journal, but in a simple Powerpoint slide show designed for a lay audience. There is no obligation to burden readers with exhaustive methodological details there. What is required is disclosure of this information upon request. The AFT does that. Schneider could have received answers to many of her questions – and saved herself a lot of time – by sending an email.
• It is likely that non-AFT teachers have similar views as AFT members, but we can’t be sure. AFT teachers are not demographically representative of all U.S. teachers: for example, they are more likely than average to teach in urban school districts. And of course they are union members.
However, the survey reveals support for the CCSS that is generally similar across most relevant demographic categories. For example, within AFT, 76% of urban teachers and 73% of non-urban teachers approve of the CCSS. For that matter, 71% of urban teachers and 78% of non-urban teachers share the worry that they will be held accountable for results on new assessments before instructional practice is aligned with the new standards. In general, the outlook of urban and non-urban AFT teachers on these issues appears to be more similar than different. The same is true in terms of region of the country. So it is likely that a survey of non-AFT teachers would yield similar findings. However, we can’t know that for sure without further research.
1724 Connecticut Avenue, N.W., Washington, D.C. 20009 202-234-5570 http://www.hartresearch.com
Even if teachers support common core standards, new materials, training, and tests cost a big hunk of money.
Ask the majority of voters if they support the common core. Run a levy to pay for them and find out the answer. America is broke, dream on common corons.
Much better detail.
Still problematic:
Only 800 teachers polled. Most are from New York. Results limited to AFT states on a national issue. This limitation should be underscored when communicating results.
“75%” rather than 600 respondents still misleading to public.
Margin of error is affected by the quality of questions, including the types of responses those surveyed are allowed to give. Yes/no questions often trap respondents and therefore contribute to error. Also there is potential for “leading the question.”
Subgroup error still a mystery.
Not a refereed journal, but used for high-stakes decisions. Detailed demographics should have been provided. A link to study details should have been part of the PowerPoint.
What about the number of calls that were hang-ups? These can be very telling. Phone surveys have incredible potential for problematic bias. Just ask President Dewey.
Okay, Now I give Hart a solid C.
The margin of error comes from the fact that this was a sample of AFT teachers, it does not have anything to do with the quality of the questions.
Think about this thought experiment: if Hart called EVERY AFT TEACHER IN THE COUNTRY, there would be no sampling error. This would be true if they had great questions, terrible questions, any kind of question. They would not have a sample, they would know the answer from the entire population.
Exact confidence intervals include both sampling error and non-sampling error (measurement bias). If approximate confidence intervals are used, then margin of error is only related to sampling error.
Even if approximate confidence intervals are used, measurement error is still an issue. Survey questions requiring dichotomous responses where dichotomous responses are not both clearly exhaustive and mutually exclusive are open to error due to limits on a respondent’s ability to answer.
There seems to be some confusion here about the nature of sampling error. In your post criticizing the study, you said
“Let’s consider the noted margin of error. Frame 2 includes a comment about a +3.5% margin of error. For 800 participants, this is in effect saying, “For general survey respondents, any result we report could be off give or take 28 people.” So, if 75% of teachers “favor” CCSS (600 teachers), the actual number could be anywhere between 572 and 628.
How is it that a research firm only handling 800 surveys cannot get a more precise reading of the data than this? Error is introduced in a lack of either question quality or precision in answering format, or both.”
The survey firm is confident that 600 or so of the 800 people they called “favor” CCSS. What they are unsure about is if randomly asking A DIFFERENT GROUP OF 800 AFT teachers would also result in 75% support or something more or less. It has nothing to do with “reading the data”, nothing to do with question quality or precision in answering format. This error is introduced by taking a sample from a population and acknowledging that the sample level of support is not necessarily the population level of support.
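This point can be illustrated with a small simulation: assume, purely hypothetically, that true support among all AFT teachers is 75%, and repeatedly poll random groups of 800. At least 95% of those simulated polls land within 3.5 points of the true value, with no questions involved at all:

```python
import random

random.seed(0)

TRUE_SUPPORT = 0.75   # hypothetical population-level support, for illustration
N = 800               # sample size used in the Hart survey

def one_survey():
    # Poll 800 randomly chosen teachers from a very large population.
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

results = [one_survey() for _ in range(2000)]
inside = sum(abs(r - TRUE_SUPPORT) <= 0.035 for r in results) / len(results)
print(f"share of simulated polls within +/-3.5 points: {inside:.0%}")
```

The spread across simulated polls is the sampling error the memo describes; it exists even though every simulated "respondent" answers perfectly, with no question wording in play.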
If Hart used approximate confidence intervals, then they are operating on the assumption that the outcome is normally distributed in the population (all of AFT). The problem here is that the outcome is measured using a dichotomous variable. It cannot be normally distributed.
Last I’ll write on this: if the population were normally distributed, the true population value would fall between 572 and 628. But the population is not normally distributed, because the outcome is dichotomous. This violates an assumption necessary for approximate confidence intervals (the calculation of the margin of error absent measurement error).
Even if the population were normally distributed, question quality would still be problematic. Reliability is suspect.
Mercedes, you’re my newest hero.
Thanks, LG. I updated my post with a brief discussion of the margin of error formula used where responses are categorical. Response on the survey instrument is part of the formula: http://www.ehow.com/how_5276026_compute-margin-error-easy-methods.html And a +/-3.5% is the high end for a sample of 800 (95% CI).
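For what it’s worth, the concern about normality and a dichotomous outcome can be checked directly. In standard treatments, the normal approximation is applied to the sampling distribution of the sample proportion (via the central limit theorem), not to the population itself, and at n = 800 with p̂ = 0.75 the approximation is quite good: the normal ("Wald") interval and the Wilson score interval, which leans less on that approximation, nearly coincide. A sketch (function names are my own):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    # Normal-approximation ("Wald") interval: p̂ ± z·sqrt(p̂(1−p̂)/n)
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def wilson_ci(p_hat, n, z=1.96):
    # Wilson score interval: better behaved for small n or extreme proportions
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

print([round(x, 3) for x in wald_ci(0.75, 800)])    # ≈ [0.72, 0.78]
print([round(x, 3) for x in wilson_ci(0.75, 800)])  # ≈ [0.719, 0.779]
```

The two intervals differ by about a tenth of a percentage point here, so whatever one thinks of the questions, the choice of approximate versus exact interval is not driving the reported ±3.5 figure.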
I ran my own “scientific” poll of teachers at the “Badass Association of Teachers” on Facebook. Results: 22% support while 78% opposed. I know that site is angry teachers rebelling so no claim as to validity and certainly not conducted as a professional poll. But every teacher I speak to opposes CC, even if MOST support the basic idea. They are opposed because of the way it’s being used against teachers.
Demographics sorely missing on teachers polled (subjects taught, gender, ethnicity, years teaching experience, locale).
And researchers don’t bother to explore the issue of how teachers can “support” CCSS when so many also admit confusion about CCSS.
There is so much room for improvement on both the conducting of this survey and the presentation of results.
Wouldn’t the demographics you list be represented in the sample at the same rates that they’re represented in the pool from which the sample is drawn, i.e., the full AFT membership? Isn’t that the premise of random sampling?
Ideally, yes. But Hart did not publicize detailed demographics with the survey, so survey readers really didn’t know much about whom they were reading. In their letter, Hart notes that more than one-third of AFT members are from New York. So, in a random sample of 800 members, approximately 267 are from a single state.
Keep in mind that when Weingarten publicly uses that “75%” stat for CCSS “support,” she is not following it with any caution about who that 75% actually is. And even a random sample can, by chance, poorly represent the population from which it is drawn, especially if the sample is relatively small compared to the population. The smaller the sample relative to the population, the more likely it is to be idiosyncratic, even if randomly selected. That is why it is important for researchers both to examine and publicize population and sample demographics AND to clearly discuss potential sample idiosyncrasies as limitations of the study.
But if 1/3 of AFT members live in NY, then it’s appropriate to have that reflected in the sample, no?
The stakes are too high to allow a sampling of 800 to be called “representative” of all the AFT teachers in the USA. I don’t know of anybody, personally, who is for the CCSS. Especially considering the way in which it’s being forced on us from an outside entity in such a hurried manner.
It’s a very, very big deal to make a move to a national consensus on assessment, rubrics, and curriculum. Huge. There’s even a serious argument against it on constitutional grounds. To use a sampling of such a small fraction of our rank and file as an indicator of our consensus is simplistic, to say the least.
Why not get a true reading through a ballot initiative? Simple questions sent to the full AFT membership? Seems to me that this would be important enough to warrant that kind of action. Have independent auditors, teachers, and administrators present in the tabulation process to ensure validity.
Unless, of course, it’s been a done deal from the start…which is what many of us believe. Nobody asked me about my feelings. Nobody asked any of my colleagues. Randi’s right: we do feel unprepared. Randi’s wrong: among my set of colleagues, we don’t support the CCSS. We haven’t even had a chance to understand them and don’t understand their necessity.
CCSS does not have more sway in NY than in the other 44 states that have adopted it, so no, NY should not have outsized influence in this survey.
“It is likely that non AFT teachers have the same views as AFT teachers, but we can’t be sure.” This kind of assumption ought not be included. For one, non AFT teachers might include many charter school teachers, and they are not held to the same standards as community public school teachers.
If you “can’t be sure,” don’t write the comment.
They can spin all of their data that they want, but the truth came out at this meeting:
http://tinyurl.com/bpc8asy
Read the comments.
SPIN is exactly correct….Anyone can juggle the data to make it fit…
Rubber Band.Data!!!!!
More Bull..
Great article, thanks.
Data. Data. Data…
We measured the teachers’ opinions about the alleged “data overload,” and the data we collected shows that the teachers approve of our system that is driven by intense data analysis.
Data. Data. Data….
Please.
IME, it’s not about the data. It’s about relegating teachers to the backseat and telling them to shut up while you drive. You don’t know the roads and your maps/ATS are too simplified to get you to where you really need to go.
I don’t trust the polls, now. There are so many of us who aren’t willing to answer anything honestly for fear of being targeted for speaking our mind. I worry about being punished for what I say here.
And regardless of the opinion that 800 teachers is an accurate sampling ratio: it doesn’t matter what the number crunchers say. It’s not. Give me a break. The fact is that RANDI believes in the CCSS. So she’s happy to say that we do, too.
“Data. Data. Data….” and then I soiled myself!
Not once has anyone asked the teachers I work with and teachers across our state what we think. We are constantly told what to do. There is very little trust and everybody watches their back. Great environment for children, teaching and learning.
Most surveys are over once you don’t answer the first question to support their POV. They say thank you…we are done.
I tell them that when they pay me in advance at a rate of $25/question then I will take part.
“There is no obligation to burden readers with exhaustive methodological details there.”
I find this not only insulting but highly suspect in the area of “validity.” Granted, a great deal of the general public is made up of arm-chair statisticians, but even those of us who have taken a research course here or there can see the possibility for errors in this type of study. Yes, we need the details.
“It is likely that non-AFT teachers have similar views as AFT members, but we can’t be sure. AFT teachers are not demographically representative of all U.S. teachers: for example, they are more likely than average to teach in urban school districts. And of course they are union members.”
The credibility of this organization just went out the window with, “…but we can’t be sure.” If you expect to be taken seriously, why would you even conduct this poll on such flimsy “sampling?” Should we choose our elected officials on the first .01% of votes that we count?
“So it is likely that a survey of non-AFT teachers would yield similar findings. However, we can’t know that for sure without further research.”
One research study does not a conclusion make: there need to be independent reproductions of this study with similar results in order to make a valid claim.
Obviously, Weingarten is playing politics again. Why did she have to ask Hart to defend her comments? She should be able to do so since she’s attached her name to this “research” in her propaganda letter. If it looks like a duck, and walks like a duck…
I would ask simply: “What does it matter if teachers “approve” of the CCSS or not?” We have no choice in the matter. We were told that our state had signed on and that we WOULD be teaching according to the CCSS and our children WILL be tested on the CCSS and our pay and job security DO hinge on those test results (to the tune of 50% of our evaluations here in FL).
The only purpose in doing a survey of this kind seems to be to justify Randi Weingarten’s selling out of the AFT members (of which I am one) and allowing her to state publicly that her membership backs her, whether we do or not. Just like her call for a moratorium and whinging about lack of resources and “training” are an after-the-fact giveaway to the reformers that have no teeth and will result in no action, as we all know oh so well.
The time to have conducted this survey would more appropriately have been before the CCSS were mandated, through RTTT, in the 47 states that adopted them. If teachers had been invited to explore the standards and ask questions about implementation and resources then, and actually had the option of saying “no,” then this survey might have had some real-world meaning. As it stands, it does not.
I sadly remember when Rudy Giuliani tried to impose merit pay on teachers in NYC and Randi was willing to go to jail and call a general strike to prevent it. Boy how things have changed in the last 15 years! Now Randi negotiates heinous contracts, signs on to CCSS and higher-stakes testing, gives credence to VAM, dithers about bar exams for teachers (when we have had the National Board Certification process in place for years and years), implying that, yes, teachers really aren’t capable at all, and now she is trying to use us to support the CCSS reform movement.
With “leaders” like that we have no need to fear the reformers. Randi and Dennis Van Roekel (NEA) have already sold us all to the highest bidder and neither is interested in or capable of leading us in protecting our beloved profession and public schools. Both will have very comfortable sinecures when they leave their current positions — how could they possibly care about those of us who are victimized by their “go along to get along” policies?
I’m a NYC teacher. I don’t support the common core. The 200 colleagues in my building do not support the common core. Poll us!
Common core will die. It does not have teacher buy in. It is unproven and untested. We weren’t consulted in the development. This is insulting to our profession. We do not support it.
Sampling is fine, it’s the question that’s the problem. Teachers support common core as it’s defined in the study, which is vague:
“a set of academic standards in English language arts and math for students in grades K-12 that have been adopted in most states”
Who would be against that? The bigger problem is in the implementation of it. There, teachers have some very serious reservations. http://www.aft.org/pdfs/press/ppt_ccss-pollresults2013.pdf
74% of teachers are “worried” that implementation will precede understanding. Only 27% feel they’ve been properly prepared. 73% worry that rushing into new assessments means that testing and test prep, not teaching and learning, will be the focus of implementation. 88% want to require school districts to reach an agreement with the union representing teachers on a plan for implementation of the Common Core standards, and 83% want a moratorium on consequences. Oh, and take a look at the “Report Card”: districts get a “failing grade.” So, the headline from this study could just as easily read: “Common Core” gets failing grade from teachers. But it doesn’t. In fact, other than the general “in favor of” question, teachers express a negative or skeptical opinion toward Common Core very consistently.
Hardly a ringing endorsement of a proposed policy–far from it–but I think the study tells us something. Teachers in the study are in favor of being involved in developing and implementing academic standards; they are in favor of “a” common core, just not this “Common Core.” Teachers want standards they can participate in developing, for which they are adequately prepared, that don’t increase the emphasis on testing, and that don’t carry high-stakes consequences. BTW, the survey only asked about a 1-year moratorium. What do you think teachers would say if they could define the length of the moratorium on high-stakes consequences? The under/over on that is when hell freezes over. I’ll take the over.
Sampling is highly problematic. Not publicizing the details of this study, including questions and data collection procedure, is also highly problematic.
They say that you can request this information if you want it. I suggest you do, and then do a follow-on analysis.
If I had a staff, perhaps I would, Flerp. My goal in writing about the study was to bring its limitations to the teaching public’s attention, and I think I have done as much.
What were the questions? What wording was used in the questions? Seems to me that one needs that information to begin to understand the results.
I have emailed them requesting the questions.
It is a misleading survey…..less than 10%. What exactly were the questions posed??
I too studied statistics, and like anything else, they can be skewed. Which part of NYS was sampled for instance? How much of it came from NYC?? And, would those same 800 teachers feel the same way now??
I had a comment on my blog this morning, and I would like to reproduce it here along with my response because I think it brings home important issues concerning the survey sample and Hart’s unwillingness to readily offer demographics:
Here is the comment:
“Not that I disagree with your analysis but, just to set the record straight, I live and teach in San Antonio, TX, and I am an AFT member. There are plenty of AFT members in TX that don’t live in Houston.”
Here is my response:
“Thank you, Annie. The problem is that quality research readily provides demographic information. Quality researchers don’t tell their readers, ‘You could have asked me for this info.’ They provide it from the outset.
I was left to seek demographic information on my own. Even in their rebuttal, Hart did not include detailed AFT membership information by locale. This is poor form.
Hart did admit that over one third of respondents (36%, or approx. 267 out of 800) were from a single state: New York. Thus, the survey result will be biased toward New York.
If AFT is present in 31 states and NY has 267 out of 800 teachers surveyed, that means that the remaining 533 represent 30 states. That is 17 or 18 teachers surveyed PER STATE. And what if AFT is present in all 45 states using CCSS? Then that’s worse representation for the remaining 44 states (excluding NY): 12 or 13 teachers surveyed PER STATE.
75% of 17 or 18 teachers is 13 teachers.
75% of 12 or 13 teachers is 9 or 10 teachers.
There is no power in reporting, for example, “AFT surveyed teachers across Texas and found that 13 support CCSS.”
So, even if AFT is all over Texas, representation on this survey remains suspect.
Keep in mind that I have calculated averages. This means that for any state where Hart surveyed more than average, they had to survey less than average in another state.
It is easier to hide such sketchy surveying behind percentage reporting (“75% of AFT teachers support CCSS”) than it is to report exact numbers. Hart is not readily offering its exact numbers. Neither is Weingarten.”
I’m sorry: 36% is 288, not 267.
If 30 other states surveyed, still an average of 17 or 18 teachers.
If 44 other states surveyed, an average of 11 or 12 teachers surveyed.
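The corrected per-state averages can be reproduced in a few lines, using the 36% New York figure from Hart’s letter (the 30-state and 44-state counts are the scenarios discussed above, not numbers Hart has confirmed):

```python
# Reproduce the per-state averages, using the corrected New York figure.
n_total = 800
n_ny = round(0.36 * n_total)   # 36% of respondents -> 288 from New York
remaining = n_total - n_ny     # 512 respondents spread over the other states

for other_states in (30, 44):
    per_state = remaining / other_states
    print(f"{other_states} other AFT states -> about {per_state:.0f} teachers per state")
```

As the averages make clear, any state surveyed above the average necessarily pushes another state below it, which is why the state-by-state counts matter.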
I corrected the numbers on the comments section of my blog:
I don’t know any NYC teacher who was surveyed, but it’s hard to believe they want CC. They are under so much stress since Randi gave back many of our rights under the contract including the right to grieve a letter. They didn’t even vote in the UFT election. Less than 25% of active members voted (down from 30% in the last election)—that speaks volumes. The win for Randi’s hand-picked successor came mostly from the retiree votes.
Hart has not disclosed detailed demographic information, which should include teaching status (retired or active) for all respondents.
Readers really need these details in order to weigh the survey outcomes.
I wrote a follow-up post in which I include the AFT survey instrument and discuss polling (with references to Gallup):
It’s a self serving question in my opinion. Most teachers I know support CC’s basic idea. It’s about who developed it, how is it being used, what input was used from the professionals (None?). Seeing that it is being used to rate schools, students, and punish teachers there is a rebellion brewing. So I do think when put in this context there is little support for CC.