College Completion Rates

For those of you looking for a little light reading, the 75-page Education Longitudinal Study of 2002 (ELS:2002): A First Look at the Postsecondary Transcripts of 2002 High School Sophomores might be just the trick. This study from the National Center for Education Statistics examined the postsecondary transcripts of 11,522 students who were high school sophomores in 2002. This large sample of US high school students gives researchers a valuable data source for illuminating the factors that influence the transition from high school to college, as well as success and failure rates throughout the postsecondary education system.

[Figure: 2002 college completion rates]

Eighty-four percent of spring 2002 high school sophomores had at least some postsecondary enrollment as of the 2012–13 academic year. Of that 84 percent, 8 percent earned a master’s degree or higher, 33 percent earned a bachelor’s degree, 10 percent earned an associate’s degree, and 7 percent earned an undergraduate certificate, which included certificates in administrative support, computer programming, cosmetology, and medical records.

The overall undergraduate grade point average (GPA) from this sample was 2.65. The figure dropped to 1.99 among those who did not attend a 4-year institution and did not earn a postsecondary credential, and rose to 3.16 for those who earned a bachelor’s degree or higher.

Only one percent of those whose 10th-grade reading assessment score was in the lowest quartile attained a master’s degree or higher, and a total of 13 percent of these students attained a bachelor’s degree. By comparison, 17 percent of those whose 10th-grade reading assessment score was in the highest quartile attained a master’s degree or higher, and an additional 47 percent attained a bachelor’s degree.

[Figure: 2002 college enrollment]

Those who attended a 4-year institution earned, on average, 86 percent of the undergraduate credits they attempted; those who did not attend a 4-year institution earned, on average, 68 percent. Other reports paint a troubling picture of recent efforts to bring college completion to all. For instance, the National Journal reported that 4 out of every 10 Californians are Latino, but only 12 percent of Latino Californians earn a bachelor’s degree. Compound this with the fact that 75 percent of LAUSD 10th-grade students are not demonstrating college readiness by earning a C or better in A-G required classes, and we have a more nuanced view of student achievement.

We know that GPA is a strong predictor of college success, but this NCES study shows how dismal the college graduation statistics really are for students who are unprepared. Only 4.9 percent of C- and D students (GPA less than 2.0) earned a bachelor’s degree by their mid-20s. For those with a solid C average (2.0 to 2.49), 14.8 percent earned a bachelor’s degree. That number rose to 28.2 percent for C+ students (2.5 to 2.99 GPA). By contrast, 65 percent of B students and 81 percent of those with a 3.5 GPA or higher earned at least a bachelor’s degree. Ten years after high school, only 41 percent of the 2002 sophomore cohort had earned a bachelor’s degree. What do you think these numbers say about a college-for-all culture?

Sources

Lauff, E., and Ingels, S.J. (2015). Education Longitudinal Study of 2002 (ELS:2002): A First Look at the Postsecondary Transcripts of 2002 High School Sophomores (NCES 2015-034). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC. https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2015034

Poor Results from Teacher PD

AIR Report

A new report from the American Institutes for Research (AIR) finds that after 13 years of significant federal investment totaling more than $30 billion, teacher professional development (PD) has shown mostly disappointing effects on teacher practice and student achievement. Birman (2009) analyzed data from more than 7,000 teachers and found that U.S. teachers have been receiving professional development that is superficial, short-lived, and incoherent:

  - Only 13 percent of elementary teachers reported receiving more than 24 hours a year of in-depth training in teaching reading.
  - Only 6 percent of elementary teachers participated in more than 24 hours of in-depth study of teaching mathematics.
  - Only one in five elementary teachers reported participating in professional development in which they practiced what they learned and received feedback.
  - Only 17 percent of elementary teachers reported participating in professional development that was explicitly based on what they had learned in earlier professional development sessions.

Gates

According to a 2014 Bill & Melinda Gates Foundation report, fewer than three in 10 teachers (29 percent) are highly satisfied with their professional development, and only 34 percent say that PD is getting better. Research suggests educators perform better when they acquire the right knowledge and skills and have a chance to practice these new learnings, study the effects, and adjust accordingly.

In 2013–14, for example, the average U.S. teacher received just $251 worth of Title II–funded professional development, and each principal received roughly $856. How should Congress revise this law so that the funds are allocated more intelligently? How should educational leaders match the right improvement activities to the right resources to the right educators? Please describe your best teacher professional development experience in the comments section.

The graphic below illustrates the benefits of collaboration. Unfortunately, only 7 percent of teachers report working in schools with effective collaboration models.

[Figure: Benefits of teacher collaboration]

CAASPP HSS Meeting

I was invited to participate in a stakeholder meeting on the California Assessment of Student Performance and Progress (CAASPP). The purpose of the meeting, hosted by the California Department of Education (CDE) and Educational Testing Service (ETS), was to gather input from stakeholders regarding a new History-Social Science content assessment, which will be aligned with the Common Core State Standards.

California should move toward an accountability system with local designs so that schools prepare students for regional colleges and work sectors. Schools and districts can use state funds to create new assessments for college and career readiness. Testing vendors need to be more inclusive of local needs, or else the opt-out movement in standardized testing will grow. California’s Local Control Accountability Plan (LCAP) requires districts to develop, adopt, and annually update, with parent and community involvement, a three-year accountability plan that identifies goals and measures progress across multiple indicators of both opportunities and outcomes. Local districts can add their own indicators to those the state requires.

Research shows that engaging teachers in jointly scoring student work produces greater success, builds professional norms, and increases content knowledge. Further, engaging students in peer review requires them to compare their work to the standards and culminates in powerful learning experiences. The next-generation assessments should incorporate both of these factors, but additional improvements in testing may be driven by artificial intelligence, computer adaptive testing, and what are colloquially known as “robo-graders” that provide instant feedback to students during the testing process.

Now that California assessments are moving beyond paper and pencil, the technology behind testing systems needs to be updated as well. This year, my free fantasy football league added a new feature: each week, it uses artificial intelligence to run if/then calculations on your individual team. If you had played Tom Brady instead of Peyton Manning, you would have come in first place instead of fifth. The computer reports on four or five of your key players, and you kick yourself for the rest of the week. Honestly, I was blown away by how light, conversational, and even funny the writing in this narrative was, because I knew it was all computer-generated. It was very user-friendly, like the products that Spotlight Education develops.

Regardless of what the assessment experience is like, it’s going to produce some form of data, and that data should be understandable and actionable by families and educators. After pondering this, I started wondering why our educational leaders aren’t using this technology to make testing more interactive and interesting. Aren’t they talking to anyone in Silicon Valley? Don’t they realize that gamification offers powerful assessment technologies and engages students in deeper learning activities? If these new tests are going to be worth the effort, shouldn’t we make them cutting-edge and restore California’s reputation as an innovator?

The next-generation assessments in History/Social Studies should:

  1. Give students some choice about the content they feel they know best (e.g., the Gold Rush, WWI, the Vietnam War).
  2. Provide a library of documents and a menu of different performance tasks like the CWRA or New Hampshire’s PACE system.
  3. Measure how well students respond to feedback from a computer adaptive engine.

For example, students construct a response arguing whether the eugenics movement was positive or negative for mankind. The computer “reads” their response and gives them a revision list. Then the students improve their writing and resubmit. A teacher or subject-area expert may still be needed to assess the level of content knowledge in a student’s answer, but we are nearing the time when computers can assess a student’s historical reasoning. This type of test would measure not only what students know, but what they can do with what they know. This is similar to what happens in the workplace, when a piece of writing has to incorporate the diverse opinions of a team or committee.
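To make that loop concrete, here is a minimal sketch in Python of the revise-and-resubmit cycle just described. The scorer, feedback generator, and toy student below are hypothetical stand-ins invented for illustration; no real ETS or Smarter Balanced scoring engine is implied.

```python
def score_essay(text):
    """Toy scorer on a 0.0-1.0 scale; a real engine would use NLP, not length."""
    return min(len(text.split()) / 500, 1.0)

def suggest_revisions(text):
    """Toy feedback generator: returns a revision list for the student."""
    tips = []
    if "because" not in text:
        tips.append("Support your claim with evidence ('... because ...').")
    if len(text.split()) < 200:
        tips.append("Develop the argument further; aim for 200+ words.")
    return tips

def adaptive_writing_task(get_draft, max_rounds=3, target=0.8):
    """Student drafts, engine scores and returns a revision list, student
    resubmits, until the target score or the round limit is reached."""
    draft, history = get_draft([]), []
    for round_no in range(max_rounds):
        score = score_essay(draft)
        history.append(score)
        if score >= target or round_no == max_rounds - 1:
            break
        draft = get_draft(suggest_revisions(draft))
    return draft, history

def toy_student():
    """A canned 'student' whose essays grow each time feedback arrives."""
    state = {"words": 120}
    def next_draft(tips):
        state["words"] += 150 * bool(tips)
        return "essay " * state["words"]
    return next_draft

final_draft, scores = adaptive_writing_task(toy_student())
print(scores)  # [0.24, 0.54, 0.84] -> measurable improvement across rounds
```

The point of the sketch is the shape of the interaction, not the scoring: the test records not just a final answer but how the response improved after feedback.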

Critics of this approach will demand evidence that students have learned all the standards. Surely, measurement experts can design an approach that samples different populations in schools and assesses them on different standards (a matrix-sampling design; see the sketch below); otherwise we will get new tests that are no different from the 60- to 80-question fill-in-the-bubble tests that provide two to four questions per standard and cause testing fatigue in our students. Allowing students to select a portion of the content will create more engagement in the testing process. If technology can provide high-quality analysis and advice to fantasy football players for free, why can’t ETS provide similarly rich and rewarding experiences when testing California students? We need tests that teach students to reflect on their choices and make better decisions. Further, these tests need to be engaging (and dare I say fun?) enough that students look forward to the challenge instead of dreading it. We have the technology. We have the know-how. We just need the consensus. Please get involved with your local schools and provide input on their LCAP proposals. Demand more than a fill-in-the-bubble testing experience for your child.
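Here is a minimal matrix-sampling sketch, again in Python. The standard names, counts, and form size are hypothetical placeholders; an operational design would balance forms far more carefully than a random draw.

```python
import random

# Matrix sampling: rather than testing every student on every standard, each
# student draws a small random subset, and the school-level picture is
# assembled across students. "HSS.1" .. "HSS.20" are hypothetical standards.
random.seed(1)
STANDARDS = [f"HSS.{i}" for i in range(1, 21)]

def assign_forms(student_ids, per_student=5):
    """Give each student a random five-standard form."""
    return {sid: random.sample(STANDARDS, per_student) for sid in student_ids}

forms = assign_forms(range(500))

# With 500 students each drawing 5 of 20 standards, every standard is answered
# by roughly 125 students -- enough for a school-level estimate at a quarter
# of the per-student testing burden.
coverage = {std: sum(std in form for form in forms.values()) for std in STANDARDS}
print(min(coverage.values()), max(coverage.values()))
```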

Sources

Darling-Hammond, L., Wilhoit, G., & Pittenger, L. (2013). Accountability for College and Career Readiness: Developing a New Paradigm. Stanford Center for Opportunity Policy in Education. Palo Alto, CA. Accessed at http://dx.doi.org/10.14507/epaa.v22n86.2014 on March 15, 2015.

Leather, P. K., & Barry, V. M. (2014). New Hampshire Performance Assessment of Competency Education: An Accountability Pilot Proposal to the United States Department of Education. November 21, 2014. Concord, NH. Accessed at http://www.education.nh.gov/assessment-systems/pace.htm on March 12, 2015.

Smarter Balanced (2014) Smarter Balanced Pilot Automated Scoring Research Studies. http://www.smarterbalanced.org/wordpress/wp-content/uploads/2014/09/Pilot-Test-Automated-Scoring-Research-Studies.pdf

Virtual Schools Perform Poorly

For some time, I have been wrestling with the problems occurring in virtual schools throughout our country. As a tech enthusiast, I believe educational technology has the potential to transform public education. However, the wrong students are being recruited into virtual schools, and because almost all virtual schools are charter schools run by for-profit Educational Management Organizations (EMOs), they are trying to maximize their dollars instead of improving their educational delivery model. Researcher Michael Barbour thinks competition and market forces in the education system have created separate-but-equal education systems. While Adam Smith championed free markets in the private sector, we have seen that public school closures devastate and devalue communities (hello, Chicago!). Thus, this post is a collection of tweets that will be sent to the California legislators who oversee K12, Inc.’s California Virtual Academies (CAVA) virtual schools, in the hope that they remember their responsibility is to California schoolchildren, not out-of-state corporations. Feel free to blog, reblog, tweet, retweet, and rock on in the search for truth, justice, and the American way.

[Image: Computer diploma]

California Virtual Academies (#CAVA) is the largest provider of virtual public education in California, using eleven locations to employ 766 teachers who work from home and educate students online. Darling-Hammond et al. (2014) found that the promise of ed tech has failed to meet the high expectations policymakers have heaped on the sector; however, there have been many successes that reveal promising approaches for technology implementation. Pollock et al. (2014) maintain that high-quality teacher assistance “seems to be mandatory for the online learning of underprivileged students.”

In 2011-12, the most recent year with available data, CAVA paid teachers an enrollment-weighted, system-wide average of $36,000 a year, while teachers at CAVA’s authorizing districts made an average of $60,000 a year. Because CAVA pays only a fraction of what corresponding districts pay, it suffers high rates of teacher turnover. In 2012-13, CAVA received $95M in public funding, of which $47M went to K12 headquarters in Virginia. Over the last four years, CAVA’s overall graduation rate was 36%, compared to 78% for the state of California. In 2012-13, 57% of schools with similar student populations performed better than CAVA, and 71% of all schools in the state performed better than CAVA. CAVA’s statewide rank was 2.9 out of 10. Some teachers spend 65 hours per week just completing administrative tasks. In 2012, K12 spent $1 million on Nickelodeon and Cartoon Network advertisements and $600,000 on teen social media sites. That year the company’s ad spending topped $20 million.

Using the California Department of Education’s definition of “continuous enrollment,” CAVA was found to have a 2012-13 student turnover rate of 24%, compared to 7% statewide. CAVA’s model of virtual education negatively impacts California kids. Virtual schools should not look like this.

CAVA’s head of school has issued this response: Response to In The Public Interest Report from California Virtual Academies, by Katrina Abston.

Sources

https://edpolicy.stanford.edu/sites/default/files/scope-pub-using-technology-report.pdf 

Pollock, M., et al. (2014). Innovating toward equity with online courses: Testing the optimal blend of in-person human supports with low-income youth and teachers in California. The Center for Research on Educational Equity. University of California San Diego. La Jolla, CA. Accessed at http://create.ucsd.edu/research/CREATE%20Equity%20RR_1Mar2014.pdf

http://www.inthepublicinterest.org/article/virtual-public-education-california-study-student-performance-management-practices-and-overs

http://nepc.colorado.edu/publication/virtual-schools-annual-2015

www.labornotes.org/2015/01/virtual-teaching-real-organizing

If you feel inclined to contact your California Legislators about this issue, I am providing their Twitter handles below:

Senate Education Committee

@SenatorCarolLiu

@bobhuff99

@MartyBlock39

@SenatorLeyva

@MrTonyMendoza

@DrPanMD

@SenAndyVidak

Assembly Committee on Education

Legislative Office Building, 1020 N Street, Room 159, Sacramento, California 95814. Phone: (916) 319-2087.

Committee members’ Twitter handles are:

@ODonnellUpdate

@AsmRocky

@YKAssembly

@KMcCartyAD7

@TonyThurmond

@AsmShirleyWeber

SITE Presentation Materials

I have been studying teacher innovation for the last five years. My research examines the confluence of teacher entrepreneurial orientation, blended learning, and online teacher professional development. What I have found is that these areas are converging in the so-called “MOOC-osphere.” This means there are great opportunities for leveraging and scaling MOOCs as assets in teacher professional development programs. We know from research (Barnett, 2002; Borko, 2004; Darling-Hammond et al., 2009; Killeen, Monk, & Plecki, 2002) that teachers often view professional development as ineffective. Most PD programs do not provide ongoing support for implementing new strategies or tools. MOOCs offer a scalable way to train staff anytime, anywhere, and in very large groups. This approach produces robust data sets that illustrate which learning activities are effective and which are not. This data can be analyzed to fine-tune the variety of trainings essential for rolling out comprehensive curricula implementations, blended learning initiatives, and 1:1 programs.

[Figure: Entrepreneurial orientation (EO) dimensions]

The entrepreneurial orientation (EO) construct has been studied for 40 years, and these studies have been published in 256 scholarly journals. Although primarily used in management research, the construct has been successfully adapted and validated as a scale for measuring teachers and administrators along the domains of innovativeness, proactiveness, and risk-taking. This work provides precise definitions for each domain as well as a baseline for comparing teachers who seek out PD opportunities online to those who do not.

Dede et al. (2005) reviewed 400 articles about online, face-to-face, and hybrid teacher PD programs and found that 40 represented high-quality empirical research. They developed five areas for examining best practices: (a) design, (b) effectiveness, (c) technology, (d) communication, and (e) methods. These focus areas may provide a framework for evaluating MOOCs as PD assets.

As a final takeaway, I would like to clarify that I am NOT suggesting we do away with all other forms of PD. However, districts should be supplementing their professional development programs with MOOCs and using that data to drive their follow-up offerings. While for-profit corporations proliferate, marketing online education programs with dubious success rates, perhaps the smart play is to market MOOCs to people who want to be lifelong learners, improve their technical skills, and increase their pedagogical moves. These people are already in your buildings. They are your teachers.

References

Dede, C., Breit, L., Jass-Ketelhut, D., McCloskey, E., and Whitehouse, P. (2005). An overview of current findings from empirical research on online teacher professional development. Harvard Graduate School of Education. Cambridge, MA. November, 2005. Accessed at http://www.gse.harvard.edu/~uk/otpd/final_research_overview.pdf

Petri, S. M. (2013). Where are the risk takers? Using the entrepreneurial orientation construct to identify innovative and proactive teachers (Doctoral dissertation, California State University, Northridge). http://scholarworks.csun.edu/handle/10211.2/4464

Creating School Data Simulations

“Using data to drive instruction” can be as difficult as determining who on your teaching staff is an “innovative educator.” Educational leaders understand the basic definition of each term, but when they try to clarify what each entails in everyday classroom practice, the definitions become slippery and harder to articulate. Every teacher believes they use data to drive instruction, but the real question is: what data are they using? Classroom data, school-wide data, district-wide data, or state or nationwide data? Who should make the decisions about which data to use?

I believe this knowledge gap can be closed by creating simulations and trainings on the use of data in education. To do this, teachers must explicitly articulate hidden assumptions that they are reluctant to voice. A classic assumption in public schooling is that students need to be present in order to learn. While competency-based learning models are challenging this assumption, most school funding is predicated on average daily attendance, so educational leaders focus on improving attendance as an essential element in improving student achievement. This may not be an accurate assumption. The rise of blended learning calls into question the reliability of the Carnegie unit, as “seat time” in a traditional brick-and-mortar school becomes increasingly irrelevant for self-motivated digital learners.

Creating a School-wide Data Simulation

[Figure: School-wide GPA distribution]

Boudett et al. (2005) suggest creating a graphical data overview and sharing it with staff. This creates an inquiry process as educators endeavor to individually and collectively interpret graphs, tables, and statistics. Examining the graph of school-wide GPA data above reveals that grades at this school follow a roughly normal distribution, which suggests that the instructional program is relatively sound. If there were a high number of 4.0 students or a high number of failing students, that might suggest that grades aren’t standards-based.
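If you want to rehearse this exercise before putting real data in front of staff, a simulated data set works well. Below is a minimal sketch in Python; every field name, distribution, and constant is an illustrative assumption, not real student data.

```python
import random
from collections import Counter

random.seed(42)  # reproducible fake data

def simulate_students(n=259):
    """Generate n fake student records. Attendance and GPA are loosely linked,
    so the sample shows a weak positive correlation, as discussed below."""
    students = []
    for sid in range(n):
        attendance = min(max(random.gauss(0.92, 0.08), 0.40), 1.0)
        gpa = 2.65 + 2.0 * (attendance - 0.92) + random.gauss(0, 0.6)
        students.append({"id": sid,
                         "attendance": round(attendance, 2),
                         "gpa": round(min(max(gpa, 0.0), 4.0), 2)})
    return students

students = simulate_students()

# Text histogram of GPAs in half-point bands: a low-tech version of the
# graphical overview Boudett et al. recommend sharing with staff.
bands = Counter(round(s["gpa"] * 2) / 2 for s in students)
for band in sorted(bands):
    print(f"{band:0.1f}  {'#' * bands[band]}")
```

Because the simulated GPAs are centered near the 2.65 average reported in the ELS:2002 sample, the histogram comes out roughly bell-shaped, which is exactly the pattern treated above as a sign of a sound instructional program.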

[Figure: Students by GPA band]

A deeper analysis, shown in the figure above, reveals exactly how many students need to improve and by how much. The school can use this information to develop a better understanding of which students need basic-skills intervention and which students need additional motivation to complete schoolwork.

[Figure: Grade-absence correlation]

After analyzing the grades given by each teacher in a school, leadership can conduct a grade-to-attendance correlation study. This data, whether by class or by school-wide GPA, can offer powerful student achievement information and get staff asking: how can a student who misses 20 days of school still have a 3.5 GPA? Or, even worse, how can students attend school every day and still fail almost every class? Are the students who miss 80 percent of the school year doing so because they are on the path to dropping out, or because they are homeless or caring for a terminally ill relative? Numbers have power, but we have to remember that our students are individuals. Sometimes this type of analysis lets us start a conversation that may be crucial in reaching a disaffected student.

[Figure: Grade-level attendance vs. GPA]

A school should examine whether or not attendance predicts a student’s grade point average (GPA). The figure above shows the relationship between attendance rate and GPA for an N = 259 student sample. The correlation was r = .371, a weak relationship, suggesting that school attendance has only a small effect on a student’s academic achievement in this sample. Since several educational researchers (Bridgeland, DiIulio, & Morison, 2006; Fisher, Frey, & Lapp, 2011) have suggested that attendance correlates directly with student achievement, this instructional program may be inconsistent in measuring student achievement.
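Continuing the simulated records from the sketch above, here is a minimal way to compute such a correlation without any external libraries. (The .371 figure in the text comes from the real N = 259 sample, not from this simulation.)

```python
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r([s["attendance"] for s in students],
              [s["gpa"] for s in students])
print(f"attendance-GPA correlation: r = {r:.3f}")  # weak positive, by design
```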

[Figure: Class-level attendance vs. GPA]

As teachers struggle to comprehend this data, it may be beneficial to zoom in on one classroom’s attendance/grade correlation. The graph above shows an individual classroom with grades correlated against attendance rate. There are only 44 dots on the graph instead of 259, so the relationship is easier to spot. There are also fewer outliers, making those students easier to identify and support with intervention.

James-Ward et al. (2013) suggested asking participants broad questions during data analysis, such as: What do we know from the data from our last school year? How does this information compare to prior years? Next, participants can generate more specific questions to discuss in breakout sessions. For example: Why did the 10th graders have the lowest ELA scores? What changed that increased our science scores so dramatically? How can we increase our attendance rate? Discussion of these questions can be used to create more specific goals and objectives for individual subjects and departments. A data team’s goal is to find changes in an instructional program, consider what caused them, then develop an action plan to improve instruction, implement it, and monitor the results.

Simulations can provide powerful epiphanies about the need to build a school-wide culture of using data to drive instruction. If only one or two teachers on a campus show strong relationships between attendance and grades, this may suggest that the school does not have a meaningful picture of its actual student achievement. If most of the school’s teachers show a strong correlation between attendance and grades, then perhaps their instructional program is more accurate in predicting actual student achievement.
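Building once more on the simulated records and the pearson_r helper above, a data team could rehearse this teacher-by-teacher comparison before trying it on live gradebooks. The teacher assignments here are random and hypothetical.

```python
import random

# Assign each simulated student to one of eight hypothetical teachers, then
# compare the attendance-grade correlation teacher by teacher.
random.seed(7)
teachers = [f"Teacher {c}" for c in "ABCDEFGH"]
for s in students:
    s["teacher"] = random.choice(teachers)

for t in teachers:
    roster = [s for s in students if s["teacher"] == t]
    if len(roster) < 2:
        continue  # a correlation needs at least two students
    r = pearson_r([s["attendance"] for s in roster],
                  [s["gpa"] for s in roster])
    print(f"{t}: n={len(roster):>3}  r={r:+.2f}")
```

In this simulation every classroom is drawn from the same weakly correlated population, so the per-teacher r values will scatter around the school-wide figure; on real data, one or two classrooms standing far apart from the rest is the pattern worth investigating.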

References

Boudett, K., & Steele, J. (2007). Data wise in action: Stories of schools using data to improve teaching and learning. Cambridge, MA: Harvard Education Press.

James-Ward, C., Fisher, D., Frey, N., & Lapp, D. (2013). Using data to focus instructional improvement. Alexandria, VA: ASCD.

Stop Collaborate Listen


Data-driven decision making (DDDM) is defined as “the process by which an individual collects, examines, and interprets empirical evidence for the purpose of making a decision” (Mandinach & Jackson, 2012, p. 27). This definition can easily be broadened to encompass the collective efforts of a group of educators at one or more school sites. Mandinach and Honey (2008) suggested using student achievement and other data, such as “attendance, course-taking patterns and grades, and demographic data,” to drive school improvement at the school, district, and state levels. Unfortunately, many school leaders across the nation are unsure how to transform mountains of student achievement data into an action plan that will improve instruction and increase student learning. A software product like Spotlight Education may help educators gain a better understanding of test scores and growth models, as well as how to set and measure goals in their everyday classroom practice.

In order to deliver an authentic simulation that demonstrates how important it is to build a data-driven culture at a school site, educational leaders first need to inspire teachers to collaborate. Daniel Pink suggests changing this conversation from How to Why: when educators buy into and understand the why, they will figure out the how. Pink further defines a new set of ABC skills school leaders can use to motivate their teachers: attunement, understanding someone else’s perspective; buoyancy, remaining afloat in an ocean of rejection; and clarity, curating, distilling, and making sense of information. You can see these concepts illustrated in the video below.

Patrick Lencioni’s book The Five Dysfunctions of a Team lays out five core principles for improving teamwork in an organization: 1) teams have to begin by building trust in each other and the leader; 2) the team must be able to discuss conflict openly; 3) everyone needs to be fully committed; 4) everyone needs to accept accountability for their role; and 5) finally, there has to be an attention to getting results. Lencioni describes the power of teamwork as essential to any organization:

When it comes to helping people find fulfillment in their work, there is nothing more important than teamwork. It gives people a sense of connection and belonging, which ultimately makes them better parents, siblings, friends and neighbors. And so building better teams at work can – and usually does – have an impact that goes far beyond the walls of your office or cubicle (pp. 4-5).

Lencioni acknowledges that teams may have a figurative leader, but the members of a team each contribute to the success of the organization; in essence, they distribute or share power with each other. An effective educational leader can co-opt Lencioni’s techniques and create powerful simulations that give teachers opportunities to analyze data collectively. Collaborative groups harness the synergy of multiple abilities: an expert statistician may have trouble presenting data in an understandable way, while a middle school English teacher may excel at chunking complex concepts. Lastly, a group of people struggling to understand something important to their institution will bond over the endeavor and “own” the results of their labor. Once a team makes sense of its data, it will generate causal theories and hypotheses, which it can test, tweak, and fine-tune to improve the school’s results.

References

Lencioni, P. (2002). The five dysfunctions of a team: A leadership fable. San Francisco, CA: Jossey-Bass.

Mandinach, E., & Jackson, S. (2012). Transforming teaching and learning through data-driven decision making. Thousand Oaks: Corwin Press.

Pink, D.H. (2011). Drive: The surprising truth about what motivates us. Penguin Books.

Spotlight Education Makes DDDM Easy

Many scholars believe that teaching and learning have not improved nationally because teachers as a group have not learned how to use data effectively to improve student learning. This is chiefly a leadership problem, because many school principals lack the necessary skills to make decisions based on data. Additional research has revealed that teachers also lack the clarity and tools to understand how their students, schools, districts, and states are performing. Thus, extensive training needs to take place so that educators and educational leaders can develop the skills to work with data and grow comfortable using data tools that increase student achievement.

Educational leaders need to make data more accessible and comprehensible for a workforce that has most likely avoided challenging coursework in statistics and mathematics. Getting these educators to embrace data-driven strategies may be a formidable challenge. For decades, businesses and government agencies have used cultural simulations to prepare executives for foreign travel, implement diversity programs, and ensure diplomacy at home and abroad. Few educational leaders have used these cultural simulations, and most will be surprised by the visceral epiphany they deliver. If similar simulations can be developed to ensure that all teachers become data-driven educational leaders, the U.S. education system could be poised to make a great leap forward.


Gary Shirts, creator of the culture shock simulation BaFa’ BaFa’, outlines ten steps to successful simulations: 1) don’t confuse replication with simulation; 2) choose the right subject to simulate; 3) develop a design plan; 4) design the simulation so that trainees take responsibility for their actions; 5) use symbols to deal with emotionally charged ideas; 6) don’t play games with trainees; 7) use non-trainees to add realism; 8) develop an appropriate performance assessment model; 9) alpha test your simulation in low-risk circumstances; and 10) set your own standards for success.

Most teachers are not trained as quantitative statisticians, and requiring them to work within the realm of data is analogous to requiring them to learn a foreign language. Simulations can ease the anxiety of this process by providing safe places for teachers to increase their understanding of student achievement data and of how to systematically collect and analyze that data for continuous instructional improvement. As teachers gain data literacy skills and improve their understanding, the data analysis tasks can become more sophisticated. Solving a problem for someone else is often less worrisome than bringing up your own problem areas and having others comment on them. As teachers become more confident about their analysis and use of data, educational leaders will see them displaying class achievement data and clearly showing students where they have improved.

Spotlight Education provides a promising software service that simplifies educational data. Instead of extensive lists of statistics with difficult-to-read tables and charts, the service creates easy-to-read narratives in either written or video form. These reports offer insightful analyses of education data, customized for each stakeholder: teachers, students, parents, principals, or superintendents.

References

Boudett, K., & Steele, J. (2007). Data wise in action: Stories of schools using data to improve teaching and learning. Cambridge, MA: Harvard Education Press.

Boudett, K., City, A., & Murnane, R. (2005). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.

Corrigan, M., Grove, D., & Vincent, P. (2011). Multi-dimensional education: A common sense approach to data-driven thinking. Thousand Oaks, Calif.: Corwin Press.

Mandinach, E., & Jackson, S. (2012). Transforming teaching and learning through data-driven decision making. Thousand Oaks: Corwin Press.

Mandinach, E., & Honey, M. (2008). Data-driven school improvement: Linking data and learning. New York: Teachers College Press.

Shirts, G. R. (1992). Ten secrets of successful simulations. Training, 29(10), 79-83.