Earlier this month, we launched the second installment of our digital dialogue series called #OpportunityRedefined, which seeks to lift up amazing leaders and ideas, provoke conversation, and incite boldness in addressing the toughest challenges we face in America.
We’re excited today to publish an interview from that series featuring Jack P. Shonkoff, Director of the Center on the Developing Child at Harvard University, who is committed to driving a more precise approach to how we design, measure, and segment intervention strategies in order to achieve targeted impact at scale for well-defined groups.
Let us know what you think on Twitter, LinkedIn, or Facebook using the hashtag #OpportunityRedefined.
What big social challenge are you most focused on solving?
For me, the big challenge is the fact that we haven’t achieved any major breakthroughs at scale over the past 50 years in reducing the large disparities that persist in educational achievement, health outcomes, and economic security at a population level. In fact, differences in reading test scores linked to family income have actually been increasing over the past 25 years. During this same time period, we’ve seen a variety of strategies employed to reduce gaps and increase opportunities – and high quality interventions have clearly produced positive gains for many children – but the magnitude of impact is typically modest and often difficult to sustain over time.
So while we do need to invest in quality improvement, strengthen staff training, and build more integrated systems in the early childhood space – we also need new ideas. We need new strategies that produce bigger impacts and we need a better understanding of why interventions work – or don’t work – for whom and in what contexts. In almost all program settings – early childhood education, family support, adult workforce development, etc. – the conventional approach to evaluation is focused on measuring average effects. The question we usually ask is “how effective is the intervention compared to a control group?” And we declare a program as evidence-based if the average difference between the two groups is statistically significant.
But this approach to evaluation misses more important questions – such as “what is the impact of this program on which outcomes for which kinds of children and families, and why?” We will never produce one intervention that will have life-changing impacts in all areas and for all young children facing adversity. We believe that this failure to segment the population is a big part of the reason why we’ve been unable to achieve greater impact at scale.
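To make the segmentation point concrete, here is a minimal sketch (in Python, with entirely invented numbers and subgroup labels used purely for illustration) of how an average treatment effect can look modest while hiding a large benefit for one subgroup and none for another:

```python
# Hypothetical illustration: reporting only the average effect of an
# intervention can mask large differences between subgroups.
# All numbers below are invented for illustration.

treated = [
    # (subgroup, outcome_gain)
    ("group_a", 12), ("group_a", 10), ("group_a", 11),
    ("group_b", 1), ("group_b", 0), ("group_b", -1),
]
control_mean_gain = 2.0  # invented mean gain in a comparison group

def mean(values):
    return sum(values) / len(values)

# The conventional question: "how effective is it on average?"
overall = mean([gain for _, gain in treated]) - control_mean_gain

# The segmented question: "effective for whom?"
by_group = {
    grp: mean([gain for g, gain in treated if g == grp]) - control_mean_gain
    for grp in ("group_a", "group_b")
}

print(f"Average effect: {overall:.1f}")   # looks modest
print(f"Effect by subgroup: {by_group}")  # large for one group, absent for the other
```

In this toy example the averaged result would be reported as a modest positive effect, while the subgroup breakdown shows one group benefiting strongly and the other not at all, which is exactly the information the average conceals.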
Tell us a bit about the Center's R&D platform. What are you doing differently at the Center on the Developing Child?
As a central feature of our Frontiers of Innovation (FOI) initiative, we are building an R&D platform to fill a niche that we feel is missing in the early childhood field. We view this effort as similar to any other R&D platform – an innovation engine that supports strategic risk-taking and expects to learn from failure as the necessary price for breakthrough outcomes. Our R&D platform operates according to different rules from the conventional approach to early childhood program development. We look to advances in neuroscience and the biology of adversity to generate new ideas and we seek greater precision in program design and evaluation. The aim of this model is to move away from the “does it work?” question and address the complementary questions of “how does it work, for whom, and why?” and “why does it not work for others?”

We applaud the important role of advocacy for increased investment in best practices, and we view the need for new, science-driven strategies and a segmentation approach to measuring their impacts as a critical missing piece in the early childhood landscape. In fact, it’s very likely that some existing programs are already achieving breakthrough outcomes for some kids – but those results are buried in a process that combines those successes with ineffective outcomes for others and only reports the average effects for all participants. And that’s why we’re driving a more precise approach to how we design, measure, and segment intervention strategies in order to achieve targeted impact at scale for well-defined groups.
From the standpoint of the Center’s activities, how do you work with people in the field?
We have three strategies that are driving our R&D platform. The first is to create, synthesize, and translate scientific knowledge as an engine for fresh thinking. For example, the science of how self-regulation and executive function develop across the lifespan has already been, and continues to be, a source of several new FOI strategies that are being tested right now. We are also catalyzing new scientific research. Center-affiliated scientists are diving deeper into issues related to brain plasticity and critical periods of development in early childhood, and seeking greater understanding of differences in susceptibility to adversity and responses to intervention. We’re also developing biological and bio-behavioral measures of toxic stress that will help us set priorities for early intervention and assess intervention effects, and that will be affordable and feasible to implement in pediatric primary care settings and valued by parents.
Our second strategy is to create innovation clusters that bring together a particular type of practitioner, a particular type of scientist, and a particular type of program developer who are constructively dissatisfied with the status quo and want to design and test new ideas that have not yet been proven to work. These ideas are informed by the science that’s coming out of our first strategy, the practical experience of the practitioners, and the unmet needs identified by families and communities.
The third strategy is to create a learning community that connects innovation clusters at multiple sites to share what they’re learning. This is an iterative and open process as opposed to waiting for embargoed findings to be published in journals, or waiting for new data to be presented at professional meetings. This allows us to use a common database and shared infrastructure for evaluating findings across sites.
So, to summarize, there are three core strategies guiding our R&D platform: There is a continuous infusion of new science, an ongoing proliferation of new innovation clusters that bring scientists, practitioners, and families together in a co-creation process, and a growing learning community that connects all of these different activities to learn from each other in real time – and is deeply committed to achieving far greater outcomes than existing services.
Where is the Center on the Developing Child in unleashing these three strategies that you’ve outlined?
We have an evolving infrastructure in which all three strategies are happening at the same time. These strategies are not ordered in a linear fashion and each is very much in an early phase. We are basically three years into this process and so much has changed during this short period of time. We’ve learned a lot about what’s working and what’s not. Since our model provides flexible seed funding and pilot grants to test interesting ideas, we target our investments based on what we are learning. We cut our losses on approaches that don’t look promising and we double down on interventions that do. So we are definitely past the initial planning stage and are now moving down the track, but we still have a long journey ahead of us to get to where we want to be – since we’re aiming for transformational change.
Can you provide a human anecdote to show how the interventions impact people's lives?
We are testing several new ideas in micro-trials with small numbers of children and families because this approach is less expensive and more nimble. One intervention we’ve been working on is a video-coaching model developed by Phil Fisher at the University of Oregon that focuses on strengthening “serve and return” interactions with young children. The idea is to videotape interactions between a parent and a child who are having problems, and then pull out the segments that show positive interactions to demonstrate to the parent what she is already doing well to support her child’s development. We started by working with mothers only and then piloted a version with fathers. After just a few preliminary trials, we found some very significant impacts on fathers who had no prior understanding of what serve and return interaction means. After seeing himself do this well on video, one father said, “Wow, I never understood what it was like to be the father of a baby before.” We see this simple intervention not only as beginning to help build interactional skills for fathers, but also as a strategy for bringing them into the intervention arena and building an understanding of the different roles that fathers can play in the development of their young children.
We are also working on interventions that use rule-based games to help children build their self-regulation and executive function skills. One pilot trial is teaching parents how to play with their children in a more developmentally supportive way. Preliminary analyses found that, on average, the program showed a modest positive impact on child executive functioning but, more importantly, showed larger effects for some children than for others. When we broke the data down and asked, “what explains the difference?” we found that it was the attention skills of the child before the program started that determined its effectiveness. Children who were able to focus better on the game play made major advances in their mental flexibility, a key executive function skill, while children with poor attention skills did not show any improvement. So that led the researcher-practitioner-parent team to ask, “Well, what should we do for those children for whom the intervention didn’t work?” They then thought about an intervention to help build a child’s attention skills, which would presumably strengthen the benefits of the game playing. Researchers are now breaking that intervention down into its component pieces so they can replicate it effectively in a targeted way.
What's next for the Center and your research?
We are trying to move our innovation clusters toward a more rigorous approach to impact evaluation focused on differential effects so we can move more quickly toward targeted impact at scale. This is a very different approach from what we were doing just a couple of years ago when we were largely on a hunt for the single, best bet. By focusing more attention on finding out what’s working for whom, we are beginning to think about the possibilities of scaling on a relatively fast track for some subgroups while we go back to the drawing board to figure out “plan B” for the children and families who did not benefit from the intervention. With this strategy to guide us, we’re building a learning community that supports flexibility but is moving toward a more systematic and rigorous approach along multiple pathways toward impact at scale.
We’re also committed to expanding our thinking beyond individual interventions for children, families, and caregivers to explore broader, community-based efforts. This is an area where we are currently engaged in some interesting work on the ground as well as in early conversations with individuals and groups who are working on large community development models but do not have an explicit theory of change about how those efforts will actually affect children’s health and learning. Large, community-based interventions and strategies are clearly an underdeveloped part of our portfolio, and we’re really interested in thinking about how to build that up.