Yesterday the Center for American Progress released a report asserting that “schools are too easy.” It was widely covered in the national media. See here, here, here, here, and here. For some reason, the media love stories claiming either that our kids don’t know anything or that they aren’t working hard enough. We have to turn up the pressure, raise standards, make the tests harder, test them more often. And then, when schools devote every day to testing and test preparation, and when the arts and physical education have been eliminated to make more time for test prep, we don’t understand why kids don’t like school!
Ed Fuller, a superb researcher at Penn State University, was curious about the validity of these findings. He reviewed the Center for American Progress study and concluded that it did not provide evidence to support its conclusions. In a follow-up comment, Ed summarizes his critique thus: Essentially, if one wants to draw policy conclusions from simple frequencies, the takeaway from the data would be that we need to make math classwork easier, because students who report that math work is easy have much higher math scores than students who report that math work is difficult. While there may be evidence that we need to increase the quality of our curriculum, that evidence just is not in this report or in the NAEP survey data.
Here is his analysis:
Yesterday, the Center for American Progress released a report entitled “Do Schools Challenge Our Students?” The essential take-away, in the words of the authors, is that “Many students are not being challenged in school.” Based on the authors’ analysis of student survey questions from the National Assessment of Educational Progress, or NAEP (http://nces.ed.gov/nationsreportcard/naepdata/), additional findings include:
- “Many schools are not challenging students and large percentages of students report that their school work is ‘too easy.’”
- “Many students are not engaged in rigorous learning activities.”
- “Students don’t have access to key science and technology learning opportunities.”
- “Too many students don’t understand their teacher’s questions and report that they are not learning during class.”
- “Students from disadvantaged backgrounds are less likely to have access to more rigorous learning opportunities.”
Let me address a few of the many, many problems with this report.
First, the authors missed multiple opportunities to analyze the data, choosing instead to rely on simple cross-tabulations and frequency counts. Anyone who has taken a Research 101 course can tell you that one of the first important rules is not to draw conclusions from frequency counts alone. More on the missed opportunities later in this post.
Second, the authors’ conclusion that students are not being challenged in school is based on the results from one question posed to students: “How often do you feel the math work in your math class is too easy?” At the 4th grade level, the authors bemoan the fact that 37% of students thought the math work was “often” or “always or almost always” too easy. At the 8th grade level, the comparable percentage was 29%. The authors continue by arguing that this unchallenging work results in far too few students (40% at 4th grade and 35% at 8th grade) meeting the NAEP proficiency standard.
This is problematic because (a) the NAEP proficiency standards are arbitrary cut scores with little or no correlation with any student outcomes (Gerald Bracey and many others have noted the flaw in using NAEP proficiency scores to argue low student performance); and (b) the authors fail to point out that 46% of the 4th grade students who reported that math class work was “almost always or always” too easy achieved proficiency, compared with only 33% of those who responded that math work was “never or hardly ever” too easy. By the logic the authors use, we would increase the percentage of students meeting proficiency by making math work easier.
Third, the authors report that only 65% of middle-school students reported that they were “always or almost always” learning in their math class. Conveniently, the authors fail to mention that an additional 24% of students reported that they were “often” learning in math class. In other words, 89% of middle school students said they were “often” or “always or almost always” learning in math class. Sounds pretty good to me.
Fourth, the authors claim that data showing poor and minority students are more likely than their more affluent and White peers to report difficulty understanding teachers’ questions is evidence that poor and minority students need greater access to more rigorous learning opportunities. While I would strongly agree that poor and minority students need greater opportunity to learn, this data says nothing about a more rigorous curriculum. A more plausible explanation is that less qualified teachers are placed in schools with high proportions of poor and minority students, and that poor students often have more trouble understanding teachers for a variety of reasons wholly unrelated to the rigor of the curriculum (for example, see David Berliner’s work on the effects of poverty on students’ vocabulary). In fact, none of the data speaks to a more rigorous curriculum other than course enrollment (data that the authors completely ignored), and that data shows only small differences in the percentage of students saying coursework is too easy across different classes. Essentially the same percentage of students said coursework was too easy in both Algebra I and basic math.
Fifth, the authors could have examined the correlations and scatterplots between different variables and even employed simple regression analyses using state-level means. Without such steps, as mentioned before, incorrect conclusions can be drawn. Even with such steps, the data preclude any definitive answers from being reached. But let’s take a look at what such analyses would tell us:
- the greater the percentage of students reporting that math class was interesting and engaging, the greater the percentage of students reporting that math work was easy;
- the greater the percentage of students reporting that math work was easy, the greater the percentage of students reporting that they were learning in class.
Further, all of these factors were positively associated with test scores for both poor students and their more affluent peers. So one plausible theory would be that math classes made interesting are perceived by students as easier, which leads to students perceiving increased learning and having higher actual scores. Note, however, that a regression analysis reverses the sign of all these factors once the percentage of poor kids is included, meaning these variables become negatively associated with state-level scores.
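The sign reversal Fuller describes is the classic confounding pattern (Simpson’s paradox at the aggregate level). Here is a minimal sketch with entirely synthetic state-level numbers (not the actual NAEP figures) showing how a variable that looks positively related to scores on its own can flip to a negative coefficient once a confounder like poverty rate enters the regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # one synthetic observation per "state"

# Confounder: each state's poverty rate
poverty = rng.uniform(0.05, 0.45, n)

# More affluent states report "math is too easy" more often
pct_easy = 70 - 50 * poverty + rng.normal(0, 3, n)

# Assumed "true" model: scores fall with poverty AND (slightly) with pct_easy
score = 300 - 100 * poverty - 0.5 * pct_easy + rng.normal(0, 2, n)

# Bivariate regression: score on pct_easy alone (coefficient comes out positive,
# because pct_easy is acting as a proxy for affluence)
X1 = np.column_stack([np.ones(n), pct_easy])
b_simple = np.linalg.lstsq(X1, score, rcond=None)[0]

# Multiple regression: add poverty as a control (coefficient flips negative)
X2 = np.column_stack([np.ones(n), pct_easy, poverty])
b_full = np.linalg.lstsq(X2, score, rcond=None)[0]

print(f"pct_easy coefficient, no control:   {b_simple[1]:+.2f}")
print(f"pct_easy coefficient, with poverty: {b_full[1]:+.2f}")
```

All the variable names and coefficients here are invented for illustration; the point is only that a cross-tab or bivariate correlation on aggregated data can carry the opposite sign from the relationship that holds once an obvious confounder is controlled, which is exactly why frequency counts alone cannot support the report’s conclusions.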
Of course, this was at the state level rather than the student level. The NAEP data don’t allow cross-tabulations at the student level for these variables, which is what the authors would need in order to draw conclusions based on actual evidence. In fact, in another section of the report, the authors state, “The data should not, however, be treated as causal research, and the responses from students could be skewed by other factors.” But that does not stop them from drawing sweeping conclusions and pushing policies to meet needs that are not well understood.
Ultimately, this report appears to have been a bundle of conclusions in search of some supporting data. After performing some serious data contortions and giant leaps of association, the authors made their point. Should anyone listen? Only when we have concrete evidence from some solid research.
Ed Fuller is an Associate Professor in the Educational Theory and Policy Department in the College of Education at Penn State University.
P.S. Ed pointed out to me that the lead author of the study is a journalist, not an expert in statistical analysis.
Essentially, if one wants to draw policy conclusions from simple frequencies, the takeaway from the data would be that we need to make math classwork easier, because students who report that math work is easy have much higher math scores than students who report that math work is difficult. While there may be evidence that we need to increase the quality of our curriculum, that evidence just is not in this report or in the NAEP survey data.
That doesn’t really follow, as you’re reversing the obvious direction of causality there (kids won’t score higher on NAEP from having easier math classes; it’s that kids who are good enough at math to find math class easy end up doing well on NAEP).
Anyway, why aren’t simple descriptive statistics perfectly sufficient here? If you wanted to know whether kids were being challenged in PE class, you wouldn’t need a fancy model: just ask kids whether they’re being asked to do physically challenging things or whether they’re just goofing off.
Of course it does not follow; it is a stupid conclusion, which was my point. Without closely examining lots of different pieces of data, anyone can draw incorrect conclusions from a frequency chart.
While perceptions are always important, they are not always accurate. I taught high school math and most of my students hated math and thought it was hard when they started the class. By the end of the class, they thought math was fun and easy even though they had learned far more in my class than in the classes of the previous teachers and the expectations were greater and the coursework more rigorous than previous classes. So, just knowing students’ perceptions of difficulty does not allow you to infer anything about challenge or rigor. The authors may be correct, but the data and analysis in the report don’t necessarily substantiate their claims.
Well, common sense does have to come into play. If 40% of kids think that PE class is way too easy, the most obvious assumption is that they’re telling the truth and could be given more challenging exercises, rather than the alternative interpretation that they all had such a great PE coach that they (namely, 40% of kids) are now Olympic-level athletes who would find any normal PE class easy. It’s not consistent with anything else that I observe to think that 40% of kids are at that level.
Because PE is JUST LIKE math.
It is for purposes of this analogy, which is why your response is not very thoughtful.
Here’s a rewrite: common sense does have to come into play. If 40% of kids think that math class is way too easy, the most obvious assumption is that they’re telling the truth and could be asked to learn more, rather than the alternative interpretation that they all had such a great math teacher that they (namely, 40% of kids) are now champions at math and hence find anything given to them to be easy. It’s not consistent with anything else that I observe to think that 40% of kids are at that level.
[To which I’d add: yes, you’ve identified one possible situation in which student perceptions of “ease” might correlate with a better math class, not a less challenging one. But both Occam’s razor and common sense suggest that your possible situation is not the most common or likely reason that students say class is too easy.]
Moreover, the fact that student perceptions don’t necessarily tell us what goes on in class is a point that the report makes on pages 15-16, which is why they call for more research. It’s a bit misleading to criticize them as if they hadn’t made that concession.
If they know the data is inaccurate and the analysis was sloppy, then they shouldn’t be making conclusions and policy recommendations. Which is why actual researchers don’t write reports based on conclusions from frequency tables and a few cross-tabs on a total of four questions. There are ethical standards for research, at least for those trained in research methodology, which is one reason why we have training. The entire point of my post is that they should never have written the report to begin with, because the data simply don’t allow for solid conclusions. Just because someone wants to draw a conclusion doesn’t mean they should use whatever data is at hand. This is why many of your own organization’s reports get slammed.
The report goes out of its way repeatedly to acknowledge the limitations of its evidence. The first two conclusions were that we need high standards and that children need to be challenged to learn (both of which are fairly anodyne, common-sense conclusions, and neither of which requires any fancy econometric modeling). And then the final conclusion is that we need further research and surveys.
Absent a huge ax to grind, there’s not much here to complain about.
Indeed, I’d think that the Ravitches of the world would seize on the opportunity to decry testing and accountability for causing schools to ignore any curriculum more challenging than whatever is needed to get lower-achieving students over the proficiency hump.
Finally, you haven’t yet tried to explain away the report’s other points about what many students say about how little reading and writing they are asked to do. That’s a very specific factual perception, not just whether a class is “easy” or not. Are these students lying or misreporting? If not, why isn’t it a perfectly valid conclusion to point out that they should have more challenging assignments?
“I’d think that the Ravitches of the world would seize on the opportunity to decry testing and accountability for causing schools to ignore any curriculum more challenging than whatever is needed to get lower-achieving students over the proficiency hump.” But she did not, nor did I. Why? Because she and I both realize the data do not support such a conclusion. We need more data and better analyses to understand what kids mean when they answer certain ways and how their perceptions are related to outcomes. We don’t know those answers; thus ethical standards of research call for the researcher not to report any findings. Otherwise, researchers would be writing reports on all the NAEP survey responses (which no researcher does, because the data don’t let you control for any confounding variables and you cannot even run appropriate cross-tabs). It’s just not data that anyone can use to substantiate claims. Period.
Think of it this way, Ed. Suppose you tested all the children in a school on their ability to jump over a four-foot bar. You learned after careful observation that 41% of the children were able to clear the bar. A researcher or journalist writes an article and says that it is “shocking!” that 59% of the children were unable to jump that four-foot bar. Shocking! What will our society do about making sure that every child can do it? After all, this is really important, and we must have rigorous standards. So the cry goes up in the land that we must raise the bar. Some will say that there are children in wheelchairs who will never be able to do it, and children on crutches, and children who don’t see why it matters. But, no excuses! The bar must be raised. That’s where we are today, with “reformers” in full cry, demanding “standards” that they themselves could never meet. I have a simple solution. They should take the tests they think are so easy and publish the results.
Can you specifically address the question in my last post? That is, why can’t we conclude anything from how little writing/reading some students say that they are asked to do?
(What possible use would “confounding variables” be as to that point? If students aren’t being asked to do any meaningful reading/writing, why isn’t that a bad thing completely independent of any other variables about their backgrounds? Confounding variables would arise if one wanted to make a causal claim about their performance on NAEP, but not if the goal is simply to make a normative claim about what curriculum should entail.)
Diane, it’s more like 40% of kids reporting that they’re not being asked to jump over any bar at all, or maybe a bar that is three inches off the ground. Granted that you can’t automatically assume they’re all telling the truth or that they all know what they’re talking about, but why wouldn’t you want to know that?
Moreover, your analogy doesn’t add up even on its own terms. The report doesn’t suggest raising the bar as in making NAEP more difficult; at most, it suggests that we need more study as to whether a more rigorous curriculum might better prepare kids to pass NAEP. By analogy, that would be like saying that if kids are failing to jump over a bar that is 2 feet high, and if many of them also say on a survey that they are almost never asked to jump for practice, then maybe we should do some research on whether to include some jumping practice once in a while.
Stuart, I was on the National Assessment Governing Board for seven years. I know the NAEP tests very well, and I know how the standards are set.
The decision about where to set the cut scores is a judgment, it is not science. The “proficient” level is equivalent to an A.
If you think that almost all children should be able to earn an A, you are advocating not for high standards but for grade inflation.
There has never been a time or place where most children earned an A, unless the standards were very low or almost all the children were exceptionally smart and pre-selected.
The changes in performance on NAEP math in the past 20 years have been nothing less than phenomenal. Anyone who thinks that the tests are “too easy” doesn’t know the tests. They are not easy. And anyone who thinks that American kids are not doing well in math has not looked at the NAEP reports.
I repeat: the improvement in NAEP scores for students in every group (white, black, Hispanic, Asian) has been dramatic.
Well, that’s all fine — I’m glad to know that 2 decades of more choice and accountability have seen such progress — but it’s not addressing the point that I raised. Namely, if lots of kids are reporting that they aren’t being asked to do much reading/writing at all, isn’t it at least conceivable that maybe, just maybe, someone ought to elicit more work out of them?
I cannot reply to comments further nested than this one.
Was the question about “pages read in a day”:
– How many pages do you read each day?
or
– How many pages are you assigned to read each day?
They’re totally different questions.
In my daughter’s school, all kids are assigned at least a half hour of reading for homework in addition to classroom reading. I don’t think it will surprise anyone to hear that there’s not 100% compliance.
I can’t imagine a day of school going by without reading 5 pages of something. You might also wonder if the kids thought the question meant only novels or if it included reading textbooks and other materials.
See? Simple!
Thank you for your analysis.
The lead author of this report is described as formerly being, among other things, research director for Education Week. That’s a national newspaper that among other things, regularly reviews and shares research. Having coordinated and directed a variety of research projects over many years, I know that researchers don’t always agree on appropriate research methods. That’s not to say all research methods are equally effective. It’s not to say I agree with the conclusions and recommendations, both of which I want to read more carefully over the weekend. But I think the lead author of this paper, Ulrich Boser, has some research credentials.
Ed Fuller is an expert in research methodology. He teaches it. He says the research design for this study was amateurish. It is risky to put out a report and seek national attention without getting it vetted by people with a research background who can point out your errors before you go public.
Fuller is an expert, but for all of his indignance, he hasn’t actually come up with any serious errors.
The research design for this study is simple and there’s nothing wrong with it at all: Look at a survey, report what the survey says, concede all over the place that students’ answers to the survey might be misleading, and call for more research.
Thank you for getting to the truth of this! I am currently attending a teachers’ workshop and printed out your post to share with the teachers there, perhaps also encouraging them to access Diane’s site for the “real” stories beyond what I share with them. It is amazing to see the erroneous results of the NAEP data being disseminated in the media and to realize this report will be quoted to me at our back-to-school professional development meetings! I will make the meeting shorter by forwarding this to all the peers I respect in our district so we will know the truth! Directly rebutting incorrect information, no matter how tactfully or politely, is a waste of time, as groupthink is the norm!
I just wanted to let you know that you were quoted favorably on Fox News’ “Red Eye” last night in regard to this story. It was about halfway through the show. Andy Levy, one of the co-hosts who is also a libertarian, described you as a former Bush DOE person who initially supported NCLB but now speaks out against it due to the act’s encouragement of teaching to the test and states dumbing down standards to make it appear as though their students are improving. He stated that he agreed with you on that. Unfortunately, one of the guests took that as a cheap “blame Bush” attack, but at least the message got out there.
Not necessarily related to math but rather to language arts or maybe social studies or even a ‘character education’ course:
Who first said the following quote?
“There are three kinds of lies, damned lies and statistics.”
A. Albert Einstein
B. Richard Feymann
C. Stephen Hawking
D. Ian Hacking
E. Benjamin Disreali
F. None of the above, it was _________________.
I’m not sure into which of the three categories the Center for American Progress report falls. What do you think?
Under which category would a question like this be placed on a ‘standardized test’: language arts, math, social studies, or something else?
The above post was meant to spoof the whole process of using standardized test results to say anything about the teaching and learning process. Logically, if the process involved is not valid, and standardized testing is irrevocably invalid (see Wilson’s “A Little Less than Valid: An Essay Review” at http://www.edrev.info/essays/v10n5index.html), then any conclusions drawn from said process will be flawed and invalid, or as Wilson himself states, “all else is vain and illusory.” Now every now and again one might get a valid/correct conclusion, just like the proverbial blind and anosmic squirrel can find a nut, but that will probably (can’t get into the mind of the squirrel, just like we can’t get ‘into the mind’ of a standardized test taker) be by chance only.
The question is meant to show some of the fallacies involved in making a standardized test question. Notice it could be a hybrid multiple-choice and short-answer response question (at what level of Bloom et al.’s taxonomy I don’t know and don’t care, as Bloom himself stated that he didn’t believe the taxonomy should be used the way it is being used, not to mention that the taxonomy has internal contradictions; see Cherryholmes’s deconstruction of it in “Power and Criticism”). The question has factual errors: two spelling mistakes. Can you spot them? And possibly more than one correct answer, depending on the knowledge base of the test taker. But since there can only be ONE CORRECT ANSWER, at least according to the test maker, what is the correct answer? Who knows? (Other than the test maker. Ha ha, joke’s on you, both the test giver and test taker.)
Folks, there is a reason why the test givers (monitors/proctors) aren’t allowed to read the test. And it has nothing to do with ‘test security.’ It has everything to do with the fact that there are so many flawed test questions (the infamous talking pineapple being the most recent) that that fact alone would make the test invalid. And there are many more errors in the process; see Wilson’s “Educational Standards and the Problem of Error,” to be found at http://epaa.asu.edu/ojs/article/view/577 .
I had to give an SAT9 test one year (I’m fortunate enough to teach a subject, Spanish, which is not tested, so I’ve not had to personally confront my conscience about abusing students by giving them a standardized test). I refuse to give a test without reading it to understand what the students are experiencing. Yes, I know it is supposedly unethical for the test giver to read the test, in order to prevent cheating, but if the process is unethical, and standardized testing is completely unethical, then counteracting it can only be ethical. I found errors in the questions at a rate of over 50% for the math section and no less than 25% for the other tested subjects. It was a disgrace to have to subject the students to such nonsense.
Two examples. One showed a diagram of an American football field with a graph of the distance the ball traveled per second after kickoff. The multiple-choice question asked something to the effect of (it was about a dozen years ago, so my memory of the exact wording may not be the best, but it suffices to show the flaw in the question): How many meters did the ball travel after 2.5 seconds? Did you catch the mistake? Answer below.
The second was this short-response question: Make as many math sentences as you can using the whole integers 0-9. Now I had to go up to a math teacher and ask, “By ‘math sentence’ did they mean equation?” Yes, was the response. So there is one problem: not all students would have known what a ‘math sentence’ is. But the main problem is that there is, for all practical purposes, an infinite number of correct responses. So Sara, the ever compliant student, starts writing away: 1+1=2, 2+2=4, 5-3=2, etc. Wow, this is fun, she is thinking, and she continues to merrily write as many ‘math sentences’ as she can. DING: “Time’s up for the math section, please close your test booklet, you will now have a five-minute break.” “Wait a minute,” Sara is thinking (she can’t voice her concern or she’ll be punished), “I didn’t finish the section.” And her corresponding score is “inadequate” even though she got all the previous questions correct. Ha ha, joke’s on Sara!!
Answer: In American football the distance of the field is measured in yards not meters.
Test question should read: “There are three kinds of lies: lies, damned lies and statistics.”
Please excuse any and all other errors in my posts. It’s hard to be my own editor. Wish there was an edit function.
Popularized by Mark Twain, and then attributed to Benjamin Disraeli. Is there a prize?
Research shows that I need a raise because I did a simple count of dollars in my account and my conclusion is I make too little money.
But seriously, any semi-educated person knows that any semi-educated person can take statistics and twist them in any way that seems convenient or fits into an agenda. It is obvious that the agenda here was to push for more standardized testing instead of telling the real story. Thank you Ed Fuller for disassembling this piece of disinformation for us.
There are lots of ways to interpret this data. Dr. Fuller suggests: “…the takeaway from the data would be that we need to make math classwork easier, because students who report that math work is easy have much higher math scores than students who report that math work is difficult. While there may be evidence that we need to increase the quality of our curriculum, that evidence just is not in this report or in the NAEP survey data.”
Another possible interpretation is that some students are ready for more challenge. It was, after all, less than half the 4th graders who reported that math was too easy.
I also was impressed with John Merrow’s recent PBS stories about how a Texas school district has been finding greater success by having predominantly low income, Spanish speaking students taking more challenging college level courses.
http://learningmatters.tv/blog/on-pbs-newshour/watch-early-college-hs-in-south-texas/10190/
Sometimes students rise and meet greater challenges, and the Merrow PBS story suggests one way this could be done. (I’ll also be writing later this week about a rural Minnesota school district in which two-thirds of the high school students are taking college-level courses, and some of them are simultaneously earning AA degrees and college diplomas.)
I would generally agree with you. And my conclusion is obviously wrong-headed, but was stated to show how the data can be misinterpreted.
Joe,
Thanks for your reply to my late night post on another topic. I checked out your website and see that you will now be affiliated with Edvisions. I have many questions concerning what you have been doing and what Edvisions does. Would you please email me so that I can find out more without clogging up this wonderful blog. My email is: dswacker@centurytel.net
Thanks!!
Duane
What’s interesting to me is the unchallenged, implicit assumption that the best way to organize learning is by batching kids by age and grade level, and then being surprised that some have a hard time learning the content and others are bored. Demanding “tougher” classes is idiotic, not because we don’t want people to learn a lot, but because it is somehow treated as the only answer to kids not doing well on some exam given once a year. Let’s not get into just how much math EVERY single child should know regardless of their passions and interests.
I believe the point of this is to uncover the batching fallacy: a system designed expressly to sort out those who “can” from those who “can’t.” The most efficient way? Make the learning time-bound and consistently paced. Now we issue reports bemoaning that kids aren’t prepared or are bored, yadda, yadda, and conclude that the answer is more of the same. Form follows function. Until we address the function of education within the context of the world we face today, we will continue to fight about which non-solution to apply this time around. Thanks for the analysis, Dr. Fuller, and for posting this, Diane.
In addition to the vocabulary deficit experienced by many children in low-income families (now about 25% of all children in America), many children are English Language Learners; if English isn’t their first language, they probably DO have difficulty understanding their teachers, both during presentation and questioning.
My experience is that unless the instructional program is truly lame, smart kids will learn more under almost any circumstances; many students who report being “bored” are actually disengaged and not meeting the challenges that are presented; and good teachers teach at a high level and come up with creative accommodations for less capable students. I detest tracking and find it entirely counterproductive. The thing that most impacts all engaged students, and those students trying to learn and participate, is disruptive students. Disruption is ruining our classrooms.
Thank you, Dr. Pickering. Isn’t it time to look into the impact of having all 7-year-olds in 2nd grade and all 10-year-olds in 5th grade? Maybe we could take a look at flexible grouping across traditional grade lines.
Of course, by now, in most schools and school districts, so many changes have been wrought that no one will be able to sort out what worked and what did not. The lack of any meaningful baseline data and any scientific approach to transformation is so frustrating to teachers. Pile on more initiatives. Maybe something will work.
Anyone? Anyone? Bueller?
Having helped start, and having youngsters attend urban district public schools that were not arranged by grades, I agree that this should be an option. I’ve seen lots of youngsters benefit from having the same teacher two years in a row. As others note, there’s nothing magic about having what Don Glines, a Minnesota educator, used to call “Self contaminated” classrooms.
I have heard from several foreign exchange students that American high school is too easy. In my opinion, it is certainly too easy for most identified gifted students.
In 2003 our district surveyed students who were identified GT, and we found that “Overall, ten percent of participating students gave the impression of being generally challenged, meaning in most subjects. Very few students said they were challenged because of hard work, new content, or critical thinking ability, things many wished for… Across all levels math was named most often as challenging, but most comments pointed to ineffective instruction.
Fourteen percent undeniably said they were not challenged at all. This figure parallels findings that between ten and fifteen percent of gifted students are thought to be underachieving (Gallagher and Gallagher, 1994). Just showing up in class guarantees students a passing grade. Some noted they are not challenged any more than any other students in the class… If students who said they were challenged in just one subject are added to this category, it may be cautiously concluded that about a third of GT students are not, or only barely, challenged.”
More important than determining how many students are challenged or not is finding out why or why not. Let’s hear what students have to share about their education. How many teachers even ask them?
From our GT survey:
“Students said math is not explained well, or that the teacher expects them to learn the material on their own. A few think they are not smart enough. Some students in advanced math such as trigonometry say the class, at times, moves too fast. Others who understand new material quickly feel slowed down when the teacher has to explain things more than once to other students.
…They like work that makes them think. High school students are pleased with the option of AP classes, saying it reduces boredom because classes are more interesting and require more thinking. Middle school students, too, enjoy thought-provoking work in advanced classes.
About twenty percent of all GT students perceived various obstacles as a challenge to learning, and obstacles increase in number as students progress through the grade levels. One third mentioned too much work, or having to do work that is too easy or lacks interest.
Students said it is difficult to do work which is not relevant or interesting. Worksheets and watching movies were given as examples of busywork, something they say challenges their time more than their intellect.
When teachers give too much work students are inclined to just memorize the material rather than really learning it or producing quality work. Some have a hard time doing homework while also working a job or taking evening classes. Students said it is difficult to motivate themselves for classwork or homework that is repetitious.
One fourth named time management, study habits, their own high expectations [perfectionism] and lack of organizational skills as a challenge to learning. They procrastinate and put things off until the last minute. Some believe they are lazy. They do well in class, but do not like to study at home or do homework. Others strive hard to get straight A’s but feel they do not have enough time for extracurricular activities besides their schoolwork. Some have trouble with organizational skills.
One fifth mentioned lack of explanation, poor instruction or lack of respect by teachers. To them, challenge included work that is difficult because of unclear explanations.
Students at all levels also want help in dealing with their often high personal expectations [perfectionism] and lessened motivation. They prefer independent learning as well as more independence, and are aware they may need help with time management and organizational skills. Some wish teachers would give them reminders. Students also mentioned hands-on learning, home projects, self-assessment and time for thought. They want more time for social connecting and more time with counselors. Options and variety are deemed important. Some students said they would like the “D” grade re-instated (this has since become a reality).
They want work to be interesting and to have more time for it to allow in-depth learning and/or develop a quality product. They also feel differentiation, feedback from teachers and class discussions will help them be more successful.
Thanks to Ed (Fuller) and Diane Ravitch, among others for promoting this conversation. Here’s a link to a newspaper column I’ve just written that cites this discussion. The column will appear in a number of Minnesota suburban and rural newspapers over the next week.
As you’ll see, I agree with several of you that setting higher standards won’t necessarily help students who already are not doing well on the NAEP test. I do think we should use student views to help assess what’s happening in public schools.
Yes, when I was an urban public school teacher and later a college faculty member, I did this. I helped set up and worked in a K-12, 500-student district option where we used feedback from students, faculty & families to help assess our strengths and shortcomings. Anyway, here’s a link:
hometownsource.com/2012/07/19/are-schools-doing-enough-to-challenge-all-students/