Pasi Sahlberg and Jonathan Hasak wrote a post about the failure of Big Data to introduce effective reforms into education. Big Data are the massive globs of information that cannot be analyzed by one person or even several; they require a computer to find the meaning in the numbers. Big Data are supposed to change everything, and indeed they have proved useful in many areas of modern life for understanding large patterns of activity: traffic patterns, disease outbreaks, criminal behavior, and so on. But those who try to understand children, teaching, and learning through Big Data have failed to produce useful insights. They have produced correlations but not revealed causation.

In reading their article, I am reminded of the sense of frustration I felt when I was a member of the National Assessment Governing Board, which oversees the National Assessment of Educational Progress (NAEP). In the early years of my seven-year stint, I was excited by the data. About the fourth or fifth year, I began to be disillusioned when I realized that we got virtually the same results every time. Scores went up or down a point or two. The basic patterns were the same. We learned nothing about what was causing the patterns.

Sahlberg and Hasak argue on behalf of “small data,” the information about interactions and events that happen in the classroom, where learning does or does not take place:

We believe that it is becoming evident that big data alone won’t be able to fix education systems. Decision-makers need to gain a better understanding of what good teaching is and how it leads to better learning in schools. This is where information about details, relationships and narratives in schools becomes important. These are what Martin Lindstrom calls “small data”: small clues that uncover huge trends. In education, these small clues are often hidden in the invisible fabric of schools. Understanding this fabric must become a priority for improving education.

To be sure, there is not one right way to gather small data in education. Perhaps the most important next step is to realize the limitations of current big data-driven policies and practices. Too strong a reliance on externally collected data may be misleading in policy-making. Here is an example of what small data look like in practice:

* It reduces census-based national student assessments to the necessary minimum and transfers the saved resources to enhancing the quality of formative assessments in schools and of teacher education in other alternative assessment methods. Evidence shows that formative and other school-based assessments are much more likely to improve the quality of education than conventional standardized tests.
* It strengthens the collective autonomy of schools by giving teachers more independence from bureaucracy and investing in teamwork in schools. This would enhance social capital, which has proved to be a critical aspect of building trust within education and enhancing student learning.
* It empowers students by involving them in assessing and reflecting on their own learning and then incorporating that information into collective human judgment about teaching and learning (supported by national big data). Because there are different ways students can be smart in schools, no single way of measuring student achievement will reveal success. Students’ voices about their own growth may be those tiny clues that can uncover important trends in improving learning.