Introduction to Data-Driven Educational Decision Making
Teachers have been using data about students to inform their instructional decision making since the early movement to formalize education in the United States. Good teachers tend to use numerous types of data and gather them from a wide variety of sources. Historically speaking, however, teachers have typically not incorporated data resulting from the administration of standardized tests (Mertler & Zachel, 2006).
In recent years, beginning with the adequate yearly progress requirements of No Child Left Behind (NCLB) and continuing with Race to the Top (RTTT) and the Common Core State Standards (CCSS) assessments, using standardized test data has become an accountability requirement. With each passing year, there seems to be an increasing level of accountability placed on school districts, administrators, and teachers. Compliance with the requirements inherent in NCLB, RTTT, and the CCSS has become a focal point for schools and districts. For example, most states now annually rate or grade the effectiveness of their respective school districts on numerous (approximately 25-35) performance indicators, the vast majority of which are based on student performance on standardized tests.
As a result, the notion of data-driven decision making has steadily gained credence, and it has become crucial for classroom teachers and building-level administrators to understand how to make data-driven educational decisions.
Data-driven educational decision making refers to the process by which educators examine assessment data to identify student strengths and deficiencies and apply those findings to their practice. This process of critically examining curriculum and instructional practices relative to students' actual performance on standardized tests and other assessments yields data that help teachers make more accurately informed instructional decisions (Mertler, 2007; Mertler & Zachel, 2006). Local assessments, including summative assessments (classroom tests and quizzes, performance-based assessments, portfolios) and formative assessments (homework, teacher observations, student responses and reflections), are also legitimate and viable sources of student data for this process.
The Old Tools Versus the New Tools
The concept of using assessment information to make decisions about instructional practices and intervention strategies is nothing new; educators have been doing it forever. It is an integral part of being an effective educational professional. In the past, however, the sources of that assessment information were different; instructional decisions were more often based on what I refer to as the old tools of the professional educator: intuition, teaching philosophy, and personal experience. These are all valid sources of information and, taken together, constitute a sort of holistic gut instinct that has long helped guide educators' instruction. This gut instinct should not be ignored. However, it shouldn't be teachers' only compass when it comes to instructional decision making.
The problem with relying solely on the old tools as the basis for instructional decision making is that they do not add up to a systematic process (Mertler, 2009). For example, as educators, we often like to try out different instructional approaches and see what works. Sounds simple enough, but the trial-and-error process of choosing a strategy, applying it in the classroom, and judging how well it worked is different for every teacher. How do we decide which strategy to try, and how do we know whether it worked? The process is not very efficient or consistent and can lead to ambiguous results (and sometimes a good deal of frustration).
Trial and error does have a place in the classroom: through our various efforts and mistakes, we learn what not to do, what did not work. Even when our great-looking ideas fail in practice, we have not failed. In fact, this process is beneficial to the teaching and learning process. There is nothing wrong with trying out new ideas in the classroom. It's just that this cannot be our only way to develop strong instructional strategies.
I firmly believe that teaching can be an art form: there are some skills that just cannot be taught. I am sure that if you think back to your own education, you can recall a teacher who just got you. When you walked out of that teacher's classroom, you felt inspired. Conversely, we've all had teachers who were on the opposite end of that effectiveness spectrum, who just did not get it, who were not artists in their classrooms. Even young students are able to sense that.
The concept of teaching as an art form is an important and integral part of the educational process, and I don't intend to diminish it. Rather, what I want to do is expand on it by integrating some additional ideas and strategies that build on this notion of good classroom teaching. The old tools do not seem to be enough anymore (LaFee, 2002); we must balance them with the new tools of the professional educator. These new tools, which consist mainly of standardized test and other assessment results, provide an additional source of information upon which teachers can base curricular and instructional decisions. This data-driven component facilitates a more scientific and systematic approach to the decision-making process. If we think of the old tools as the art of teaching, then the new tools are the science of teaching.
I do not think that the art of teaching and the science of teaching are mutually exclusive. Ideally, educators would practice both. In this publication, however, I focus on the data-driven science of teaching.
A Systematic Approach
Taking the data-driven approach to instructional decision making requires us to consider alternative instructional and assessment strategies in a systematic way. When we teach our students the scientific method, they learn to generate ideas, develop hypotheses, design a scientific investigation, collect data, analyze those data, draw conclusions, and then start the cycle all over again by developing new hypotheses. Likewise, as educational practitioners, we can use the scientific method to explore and weigh our own options related to teaching and learning. This process is still trial and error, but the trial piece becomes a lot more systematic and incorporates a good deal of professional reflection (Mertler, 2009). And, like the scientific method, the decision-making process I describe in the following sections is cyclical: the data teachers gather through the process are continually used to inform subsequent instruction. The process doesn't just end with the teacher either deciding the strategy is a winner or shrugging and moving on to a new strategy that he or she hopes will work better.
A major reason teachers don't rely more on assessment data to make instructional decisions is the sheer volume of information provided on standardized test reports. One teacher comment I often hear is, "There is so much information here that I don't even know where to start!" One way to make the process less overwhelming is to focus your attention on a few key pieces of information from test reports and other assessment results and essentially ignore other data, which are often duplicative.