This article originally appeared on the Brookings blog.
With 258 million children out of school and 617 million children and adolescents in school but not achieving minimum proficiency levels in reading and math, we are rapidly running out of time to achieve Sustainable Development Goal (SDG) 4 by 2030: ensuring quality education and lifelong learning opportunities for all children.
The global education sector faces the dual challenge of improving learning at scale and measuring whether we’re on track. To address the scope of the learning crisis, we must identify innovations that have the potential to improve learning in a manner that is sustainable, equitable, and cost-effective, and then understand how to scale these innovations in a given context.
For several reasons, data is a crucial piece of the puzzle. First, data can provide insights on program effectiveness, allowing governments and donors to identify, finance, and scale the most effective interventions. However, data is not just about final project outcomes: Collecting information about implementation in real time, and creating feedback loops to decisionmakers, allows for adaptation and improvement. In the global education sector, we tend to spend resources trying to perfect our innovations before they’re carefully evaluated, which can mean that the most powerful insights come far too late. Instead, stakeholders need to use data and insights to iterate and pivot quickly—highlighting the importance of collecting the right kinds of data.
HOW WE ARE WORKING WITH DATA TO IMPROVE TEACHING AND LEARNING
We at the Center for Universal Education (CUE) and STiR Education heavily rely on data to improve learning at scale and measure progress in our areas of research. While data and information systems are central to informing our work, we employ them in distinct ways. Below we share our unique approaches to promoting quality education for all through data collection and use.
STiR: Using “thick” and “thin” data to motivate education officials and teachers
STiR Education partners with governments to reignite intrinsic motivation and lifelong learning among government officials, teachers, and students, impacting more than 200,000 teachers and 6 million children in India and Uganda.
STiR implements a radically different approach to monitoring, evaluation, and learning using a combination of “thin” and “thick” data. By thin data, we mean observable outputs and outcomes, whereas thick data refers to the description of processes. We focus not just on collecting outcome data for the sake of accountability (e.g., are teachers showing up to class or are students passing their exams?) but also on stimulating discussion among officials and teachers using thick data, which motivates them to further drive behavior change.
Thick data could include the feedback that an official provides to a teacher, the consequent change in teaching practice, and the extent to which students trust their teacher and are engaged in learning. This type of data tends to be more motivating for officials and teachers than traditional thin outcome measures, for two reasons. First, it helps them to better understand progress toward improving common outcome indicators—such as whether students are passing their exams—by showing the underlying steps toward that goal (e.g., do students feel safe in the classroom and are they engaged in learning?). Second, it tends to be more actionable, so that teachers and officials understand what they could do differently in their classrooms and/or district offices to improve lifelong learning.
By measuring both thin and thick data rapidly at large scale, STiR aims to equip education officials and teachers with the type of information that simultaneously tells us whether progress is being made toward intended behavior change and how to improve based on these insights. Currently, STiR is building a data system based around a mobile app that will share both thick and thin data with teachers and officials on a monthly basis to provide actionable, real-time guidance on improving student learning.
CUE: Using real-time data for scaling and achieving education outcomes
CUE pursues two different lines of work uniquely concentrated on data. The Millions Learning project focuses on how to scale and sustain quality education initiatives, especially for marginalized children and young people. The Real-time Scaling Labs—a collaborative effort with local institutions and governments in a number of low- and middle-income countries—seek to generate evidence and provide practical recommendations around the process of scaling. Using real-time data to guide the scaling process, CUE and partners are putting the important principles of iteration and adaptation into practice.
CUE also focuses on the collection, analysis, and use of data to achieve outcomes in early childhood development and education—centered around four key types of data: real-time performance, results, cost of service delivery, and cost of inaction. We are particularly interested in how technology can help gather and analyze real-time performance data.
This focus emerged from several years of work studying outcome-based financing, and in particular, impact bonds. To participate in outcome-based financing structures, service providers need accurate real-time performance data so they can adapt their delivery and meet targets. While interest in improving performance management has grown over the last few years, so too has the availability of technology. This has resulted in a wide variety of new technological tools and platforms for collecting, analyzing, and integrating data into education. CUE is investigating the landscape of existing technological tools for data collection and use in education, with the goal of providing decisionmakers with robust evidence and practical guidance about the use of data in achieving education outcomes.
STiR’s and CUE’s complementary work highlights the central role of data in achieving SDG 4, the potential for technology to facilitate data collection and analysis, and the use of data to support scaling. While we use data in distinct ways, our work showcases the need to think differently about monitoring, evaluation, and learning, and to integrate real-time data into decisionmaking.
By Rein Terwindt, Emily Gustafsson-Wright and Jenny Perlman Robinson