How Research Helps Students Become Better Readers

By: Shayla Thiel Stern

Editor’s Note: This article is the first in a series, “Using Evidence for Improvement,” which looks at how ServeMinnesota’s Center for Advancing Research to Practice gathers and analyzes data in order to constantly make its programs better for participants.

Peter Nelson, Ph.D., is the Director of Research and Innovation at ServeMinnesota, where he works across our portfolio of programs to ensure key principles of effective implementation and evidence-building are applied. In this interview, we discussed how ServeMinnesota’s team implemented a new assessment strategy to help improve learning outcomes for students in Reading Corps.

Using Assessment to Understand a Problem

Peter Nelson, Ph.D.

ServeMinnesota: Reading Corps is a program that has helped younger elementary students read at grade level for more than 15 years. The program’s tutors use different strategies, or interventions, to help the students. How do we assess how those students are doing?

Peter Nelson: We use brief measures of students’ literacy skills to get a sense of how they are doing across time. They practice during each session five days a week, but they also complete the brief assessments once each week. Each score gets plotted on a graph. We compare those scores to what we call an aim line, which is a line drawn from where they start in the fall to a benchmark in the spring linked to mastering proficiency and future college readiness.
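To make that concrete, here is a minimal sketch of how an aim line can be computed, assuming simple linear growth from the fall score to the spring benchmark. The function name, the 30-week window, and the example numbers are illustrative, not Reading Corps’ actual values.

```python
# A minimal sketch of the aim line described above: a straight line from the
# student's fall starting score to the spring benchmark. The week count,
# scores, and benchmark here are hypothetical.

def aim_line_value(week: int, fall_score: float, spring_benchmark: float,
                   total_weeks: int = 30) -> float:
    """Expected score at a given week if the student grows along the aim line."""
    weekly_growth = (spring_benchmark - fall_score) / total_weeks
    return fall_score + weekly_growth * week

# Example: a student starting at 20 words correct per minute in the fall,
# aiming for a spring benchmark of 70, checked at week 12.
print(aim_line_value(12, fall_score=20, spring_benchmark=70))  # 40.0
```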

When 3 of the last 5 weekly scores are above that aim line, and 2 of those scores are above the next benchmark, students are exited from the program. For example, if you exit in March, you need 3 data points above your goal line, two of which must be above a future spring goal. It’s a rigorous way to exit kids from receiving the extra support that Reading Corps provides.
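Here is a minimal sketch of that exit rule, assuming we already have each of the last five weekly scores paired with the aim-line value for that week. The function name, score units, and example numbers are hypothetical.

```python
# A sketch of the exit rule described above: 3 of the last 5 weekly scores
# must beat the aim line, and 2 of those must also beat the next benchmark.
# All numbers are hypothetical.

def meets_exit_criteria(weekly_scores: list[float],
                        aim_line_values: list[float],
                        next_benchmark: float) -> bool:
    """True if the last five scores satisfy both parts of the exit rule."""
    last_five = list(zip(weekly_scores[-5:], aim_line_values[-5:]))
    above_aim = [score for score, aim in last_five if score > aim]
    above_next = [score for score in above_aim if score > next_benchmark]
    return len(above_aim) >= 3 and len(above_next) >= 2

# Example: five recent weekly scores against their aim-line values,
# with a future spring goal of 70 words correct per minute.
scores = [58, 66, 72, 69, 74]
aim = [60, 62, 64, 66, 68]
print(meets_exit_criteria(scores, aim, next_benchmark=70))  # True
```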

We want to be confident that when we remove Reading Corps support, the student will stay on track and be successful.

What was the initial problem you were interested in?

For a few years, we knew there were kids who exited from Reading Corps but weren’t staying on track. You can look at the probability in the paper: we were seeing about 34% of students falling off track after an exit decision. To be clear, those kids were much better off than they had been previously, but many just weren’t maintaining the level of performance we’d hope for once Reading Corps support was discontinued.

So even though about 66% of kids were maintaining a great growth trajectory, we care a lot about the 34% that weren’t.

Weighing Different Approaches

How did you figure out why this was happening?

We looked at a lot of factors to understand why we were seeing that drop off in performance after kids left Reading Corps.

We wondered, for example, if the point in time during the school year when kids exited the program impacted their long-term growth. That didn’t explain it. We then looked at demographics of kids – race, gender and so on – and that wasn’t explaining much either.

Eventually, we started thinking less about predicting the drop off and more about whether changes to the decision guidelines would be useful. For example, we spent time thinking about whether we should change our exit criteria and make them more rigorous.

Would that be a good approach?

We didn’t see a lot of potential return there. The yield turned out not to be worth it, because any time you keep kids in the program longer, you’re keeping another kid out. A kid who needs support isn’t getting it, while the student still in the program is doing fine. So we ended up not evaluating new exit criteria in practice.

So we shifted our focus from what happens before kids exit to what we could do after the exit.

Assessment as a Means to Solve the Problem

What did you wind up trying?

One thing we discussed was giving kids some extra practice after exit. We started thinking about the least invasive form of practice, which was conveniently already baked into kids’ experiences while they were in the program: each week during the intervention, tutors monitor students’ progress using a short, minute-long assessment of reading fluency. So we thought, why not just keep that going after the intervention? Progress monitoring has previously been documented as a practice that can improve students’ academic achievement, but only by way of informing instruction or adapting it to students’ needs. It has never been discussed as something that is inherently beneficial.

If you think about progress monitoring as a task, though, kids are getting an opportunity to practice the exact skill – reading from a passage – that they’re tested on at the end of the year. They’re also getting feedback on how they’re doing and a reminder of what the goal is for the end of the year.

These are really powerful things that we talk about in intervention – opportunity to respond, opportunity to engage in the task, and feedback. So that was our hypothesis: that continued progress monitoring after kids have exited Reading Corps could make a difference for long-term outcomes.

You were able to test the hypothesis through a research pilot. What happened?

We saw a 10 to 14 percent increase in the probability of meeting the end-of-year benchmark among kids who got post-exit progress monitoring. This struck us as a really promising impact given the low level of time and resources involved.

This year, we are in the middle of a randomized controlled trial of post-exit progress monitoring – we have 100 sites, 50 of which will continue to monitor kids’ progress weekly after they exit from Reading Corps and 50 of which will not. It’s a really minor change to programming with a big potential payoff.
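As a rough illustration of that design, the sketch below randomly splits 100 sites into two equal groups. The site names, the fixed seed, and the group labels are hypothetical, not the study’s actual assignment procedure.

```python
# A minimal sketch of site-level random assignment: 100 sites split evenly
# into a group that continues post-exit progress monitoring and a group
# that does not. Site names and the seed are hypothetical.
import random

sites = [f"site_{i:03d}" for i in range(1, 101)]
rng = random.Random(2024)  # fixed seed so the assignment is reproducible
rng.shuffle(sites)

monitoring_group = sites[:50]   # continue weekly post-exit progress monitoring
comparison_group = sites[50:]   # business as usual after exit

print(len(monitoring_group), len(comparison_group))  # 50 50
```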

How were you able to identify this issue and implement change so quickly?

We’re able to do it largely because we have infrastructure that supports innovation. It supports the analysis – we have pretty sophisticated data systems, where we know how kids are performing and growing, but we also have information about their experiences. We know how many minutes they’re getting, when they’re getting support, what exactly they’re doing, where they are geographically and what kind of tutors they’re working with. It’s a really rich dataset. Not a lot of folks in academia have access to that kind of data. It’s millions of cases and thousands of kids.

The other piece is that we have this program serving all of these kids, and it’s still relatively nimble. In a year’s time, we can say, “We learned this, now let’s change this.” And there aren’t a lot of analogs to that. I don’t think in your typical education setting you can say, “We found this out, we’re going to make this change.” We can. In this case, we might just make it for a subgroup, but if we find positive results this year, it’s something we can rapidly scale for everybody nationally, which is great.

What would it take to decide to rapidly scale that change nationally?

If we see ANY impact that is statistically significant, meaning that the kids who got post-exit progress monitoring in the randomized controlled trial this year were better off at the end of the year than similar kids who were not participating, that will be enough for us to make the change. If we see the same effect, that would be great. Even if it’s just 10 percent, that would be enough. Getting one additional student out of every 10 to meet their benchmark, at the scale of thousands of kids, is something that’s notable.

Learn more about the proven impact of Reading Corps and how to become a Reading Corps tutor.
