Ten years ago, when Policy Equity Group Vice President Dr. Kelly Etter wrote her dissertation about early care and education (ECE) quality, she described quality rating and improvement systems (QRIS) as a promising strategy to improve conditions for children, families, and providers. Over the years, she, along with the rest of the ECE field, has been surprised by the consistent finding across states that QRIS are not effective.

Study after study has found that differences in quality ratings don’t correspond to meaningful differences in child outcomes. As a result, providers and states have expended resources on ineffective (and often burdensome) systems, and families are left in the dark about which programs would be best for their children.

So where did the QRIS concept go wrong?

This question was posed earlier this month at the InterAct Now: 2022 CLASS® Summit, a conference for ECE leaders. During a panel discussion, Dr. Etter unveiled a series of QRIS explainer videos that explore what went wrong and how we can do better.

These videos use a classroom favorite — Duplo® blocks — to make abstract ideas about quality more concrete and show what can be possible if we approach ECE systems-building work with the creativity and imagination of play. As any 3-year-old in the block area will tell you (or gleefully show you), sometimes you have to knock it all down to rebuild.

Can the field of ECE be as fearless with QRIS?

Video 1: How QRIS Failed the Early Learning Challenge

The number of fully operational QRIS increased from just five in 2001 to 45 in 2021. This proliferation is due in part to the federal Race to the Top-Early Learning Challenge (RTT-ELC) program, which required QRIS implementation as a condition of the grant. These ECE system-building grants were awarded to 20 states between 2012 and 2015, greatly expanding the reach of QRIS.

The RTT-ELC grants also required states to conduct validation studies to ensure QRIS was working as intended. As Dr. Etter outlines in the first video, most validation studies have found only weak evidence that higher quality ratings predict better child developmental outcomes. Indeed, a review of published QRIS validation studies found that, collectively, of the 500 comparisons of child outcomes across levels of quality, only 9 percent were statistically significant.

 

Video 2: The Fault in Our Star-Ratings

The second video explains the four key reasons why QRIS don’t work:

– Not centering what matters most. Teacher-child interactions are the strongest predictors of child outcomes in ECE settings. Yet, rarely are they the central focus of QRIS. Instead, most so-called quality indicators are no more than pieces of paper: a diploma, a curriculum, a policy handbook. Though these factors can support strong teacher-child interactions (and in turn, child outcomes), they don’t guarantee them.

– Assuming one “right way” to quality. QRIS tend to be prescriptive. For example, many QRIS require certain levels of education for staff. However, while some teachers benefit from higher education, others may reach the same skill level through experience, an openness to coaching, or other lived experiences. We need to stop trying to prescribe the route to quality and instead focus on the destination.

– Collapsing across quality dimensions. Though program policies, learning environments, health and safety practices, family engagement, and teacher-child interactions are all important, they are like different colors in the quality “rainbow.” Combining all this complex information into a single data point creates a “muddy” picture of quality that hides important nuances about a program.

– Ignoring variation across classrooms. Within a child care center, children’s experiences can vary widely across classrooms. Indeed, research shows that differences in quality between classrooms in the same program are often bigger than differences between programs. Using an average or random sample of classrooms likely doesn’t adequately capture what kind of supports would most help teachers or what families can expect from the program.

 

Video 3: Three Big Ideas for Rebuilding QRIS

The good news, as explained in the third video, is that we can fix the problem. Even better, we can draw inspiration from a 3-year-old playing with blocks: We must knock down the existing QRIS structure and build a new one. Dr. Etter offers three big ideas for knocking down and rebuilding:

1. We need to dispose of the current “kitchen sink” approach to monitoring. What if, instead, we centered teacher-child interactions, honored multiple valid pathways to excellence in this area, and provided educators with tailored supports to hone their craft?

2. We must move away from quality incentives that often widen inequities, such as tiered reimbursement and quality bonuses. What if states reallocated this funding to quality investments like workforce compensation and upfront funding for providers as a “down-payment” on quality improvements?

3. We should retire star-ratings, which do not provide accurate or useful information to families. What if, as in the medical profession, ECE professionals and programs could demonstrate different specializations, showcasing their strengths and providing families with nuanced information about the quality of different programs?

 

Imagine if we were as fearless as a 3-year-old.

Imagine if, instead of trying to “fix” or improve ECE programs, we honored and supported their strengths. Imagine if, instead of universally prescribing college credits or dictating the type and quantity of learning materials available to children, we focused on what each educator needs to reach their highest potential. Imagine if, instead of offering families a choice between “high quality” and “low quality,” we asked them, “What qualities of a program matter most for you and your child?” And imagine if, by honoring multiple definitions of and pathways to quality, we could ensure a strong foundation for every child and the adults who care for them.

The failure to fundamentally reimagine QRIS is standing in the way of meaningful change.