“Using data to drive instruction” can be as difficult to pin down as determining who on your teaching staff is an “innovative educator.” Educational leaders understand the basic definition of each term, but when they try to clarify what it entails in everyday classroom practice, the definitions become slippery and harder to articulate. Every teacher believes they use data to drive instruction, but the real question is what data they are using. Are they using classroom data, school-wide data, district-wide data, or state or nationwide data? Who should decide which data to use?
I believe this knowledge gap can be closed by creating simulations and trainings on the use of data in education. To do this, teachers must explicitly articulate hidden assumptions that they are reluctant to voice. A classic assumption in public schooling is that students need to be present in order to learn. While competency-based learning models are challenging this assumption, most school funding is predicated on average daily attendance. Therefore, educational leaders treat improving attendance as an essential element of improving student achievement. This may not be an accurate assumption. The rise of blended learning calls into question the reliability of the Carnegie unit, as “seat time” in a traditional brick-and-mortar school becomes increasingly irrelevant for self-motivated digital learners.
Creating a School-wide Data Simulation
Boudett et al. (2005) suggest creating a graphical data overview and sharing it with staff. This creates an inquiry process as educators endeavor, individually and collectively, to interpret graphs, tables, and statistics. Examining the graph of school-wide GPA data above reveals that grades at this school follow a roughly normal distribution. This suggests that the instructional program is relatively sound. If there were a high number of 4.0 students or a high number of failing students, that might suggest that grades aren’t standards-based.
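A leader preparing this kind of overview could start with a quick distribution check before building the graphic. The sketch below is illustrative only: the GPA list and the 4.0/failing thresholds are hypothetical, not drawn from the school described above.

```python
from collections import Counter

# Hypothetical school-wide GPA sample; real data would come from the SIS.
gpas = [0.8, 1.1, 1.8, 2.0, 2.2, 2.4, 2.5, 2.6, 2.7, 2.8,
        2.9, 3.0, 3.0, 3.1, 3.2, 3.3, 3.5, 3.7, 3.9, 4.0]

def gpa_histogram(gpas, bin_width=0.5):
    """Bucket GPAs into half-point bins for a quick distribution overview."""
    bins = Counter(int(g / bin_width) * bin_width for g in gpas)
    return dict(sorted(bins.items()))

# The red flags named in the text: a pile-up at 4.0 or at the failing end
# may indicate that grades are not standards-based.
share_4_0 = sum(g >= 4.0 for g in gpas) / len(gpas)
share_failing = sum(g < 1.0 for g in gpas) / len(gpas)
```

Plotting the resulting bins as a bar chart gives staff the same at-a-glance shape check described above: a rough bell suggests a sound program, while heavy tails invite questions.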
A deeper analysis of the above figure shows exactly how many students need to improve and by how much. The school can use this information to develop a better understanding of which students need basic skills intervention and which students need additional motivation to complete school work.
After analyzing the grades given by each teacher in a school, leadership can conduct a grade-to-attendance correlation study. This data, whether by class or by school-wide GPA, can offer powerful student achievement information and prompt staff to ask: how can a student who misses 20 days of school still have a 3.5 GPA? Or, even worse, how can a student attend school every day and still fail almost every class? Are the students who miss 80 percent of the school year doing so because they are on the path to dropping out, or because they are homeless or caring for a terminally ill relative? Numbers have power, but we have to remember that our students are individuals. Sometimes this type of analysis starts a conversation that may be crucial in reaching a disaffected student.
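The two questions above amount to flagging students whose grades and attendance point in opposite directions. A minimal sketch, assuming attendance rates and GPAs are already exported as records (the student labels, records, and cutoffs here are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical (student, attendance_rate, gpa) records; labels illustrative.
records = [
    ("A", 0.88, 3.5),   # missed roughly 20 days, still carries a 3.5
    ("B", 0.99, 0.7),   # nearly perfect attendance, failing almost everything
    ("C", 0.95, 3.1),
    ("D", 0.20, 0.4),
]

def flag_mismatches(records, low_att=0.90, high_gpa=3.0,
                    high_att=0.95, low_gpa=1.0):
    """Surface students whose grades and attendance disagree --
    the cases worth a conversation, per the discussion above."""
    absent_but_thriving = [s for s, att, gpa in records
                           if att < low_att and gpa >= high_gpa]
    present_but_failing = [s for s, att, gpa in records
                           if att >= high_att and gpa < low_gpa]
    return absent_but_thriving, present_but_failing
```

The thresholds are judgment calls a data team would set for its own context; the point of the output is not a verdict but a short list of individual students to talk to.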
A school should examine whether or not attendance is a predictor of a student’s grade point average (GPA). The above figure shows the relationship between attendance rate and GPA for a sample of N = 259 students. The correlation was r = .371, a weak-to-moderate relationship: attendance explains only about 14 percent of the variance in GPA, suggesting that school attendance has a small effect on a student’s academic achievement. Since several educational researchers (Bridgeland, DiJulio, & Morison, 2006; Fisher, Frey, & Lapp, 2011) have suggested that attendance correlates directly with student achievement, this instructional program may be inconsistent in measuring student achievement.
As teachers struggle to comprehend this data, it may be beneficial to zoom in on one classroom’s attendance/grade correlation. The graph above shows an individual classroom with grades plotted against attendance rate. There are only 44 dots on the graph instead of 259, so the relationship is easier to spot. There are also fewer outliers, making those students easier to identify and support with intervention.
James-Ward et al. (2013) suggested asking participants in data analysis broad questions such as: What do we know from the data from our last school year? How does this information compare to prior years? Next, the participants can generate more specific questions that can be discussed in breakout sessions. For example: Why did the 10th graders have the lowest ELA scores? What changed that increased our science scores so dramatically? How can we increase our attendance rate? Discussion of these questions can be used to create more specific goals and objectives for individual subjects and departments. A data team’s goal is to find changes in an instructional program, consider what caused them, and then develop an action plan to improve instruction, implement it, and monitor the results.
Simulations can provide powerful epiphanies about the need to build a school-wide culture of using data to drive instruction. If only one or two teachers on a campus show a strong relationship between attendance and grades, this may suggest that the school does not have a meaningful picture of its actual student achievement. If most of the school’s teachers show a strong correlation between attendance and grades, then perhaps the instructional program is a more accurate predictor of actual student achievement.
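The teacher-by-teacher comparison described above can be run as one loop once each class’s attendance and grade columns are exported. A sketch under stated assumptions: the teacher names and data pairs are invented for illustration, and the Pearson computation is the standard formula, not a method prescribed by the authors cited here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-teacher (attendance_rates, gpas) columns; names illustrative.
classes = {
    "Teacher A": ([0.6, 0.7, 0.8, 0.9, 1.0], [1.0, 1.8, 2.5, 3.1, 3.8]),
    "Teacher B": ([0.6, 0.7, 0.8, 0.9, 1.0], [3.9, 1.2, 3.5, 1.1, 2.0]),
}

# Rank teachers by how strongly their grades track attendance. Per the text,
# a campus where only one or two show a strong relationship may lack a
# meaningful picture of its actual student achievement.
by_strength = sorted(
    ((name, pearson_r(att, gpa)) for name, (att, gpa) in classes.items()),
    key=lambda pair: abs(pair[1]), reverse=True)
```

The ranked list is a conversation starter for the data team, not an evaluation instrument: a low per-class correlation invites questions about grading practices, not conclusions about the teacher.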
Boudett, K., & Steele, J. (2007). Data wise in action: Stories of schools using data to improve teaching and learning. Cambridge, MA: Harvard Education Press.
James-Ward, C., Fisher, D., Frey, N., & Lapp, D. (2013). Using data to focus instructional improvement. Alexandria, VA: ASCD.