A new report on the academic performance of low-income students receiving Tax Credit Scholarships in Florida finds they are making modestly larger gains in reading and math than their counterparts in public school.
That conclusion from 2009-10 test data is encouraging for those of us who work to provide these learning options, which served 34,550 low-income students statewide last year. But the report, released today and written by respected Northwestern University researcher David Figlio, is also a reminder of the inherent complexities of judging whether these programs work.
Figlio has both a brilliant mind and 13,829 test scores with which to work, and yet his report is filled with qualifiers and provisos and cautionary notes. That’s largely because the scholarship program is so different from the typical public education option. In this case, students are attending more than 1,000 private schools where, on average, four of every five students pay their own tuition. The average scholarship enrollment in each school, for 2009-10, was only 28 students.
That kind of school profile tends to serve as an asset to the economically disadvantaged students, but not necessarily for the standard approach to academic oversight. Since these are mostly private-market schools, the state won’t allow them to administer the state test, known as the Florida Comprehensive Assessment Test (FCAT). But the law does appropriately require every scholarship student to take a nationally norm-referenced test approved by the state Department of Education, and most students take the well-regarded Stanford Achievement Test.
These tests do allow Figlio to make direct national comparisons, so we know without qualification that the typical scholarship student scored at the 45th percentile in reading and the 46th percentile in math. We also know that their year-to-year gain from 2008-09 to 2009-10 was the same as that of students of all income levels nationally, which is a solid piece of academic evidence.
Where things get more muddled is in trying to compare them to low-income students in Florida public schools. As odd as this may sound, the two groups are substantially different. And they are different in ways that tend to be counterintuitive.
First, Figlio has looked at the state test scores for students prior to choosing the scholarship and has determined for three years running that these students are among the lowest-performing in the public schools they leave behind. “Scholarship participants have significantly poorer test performance in the year prior to starting the scholarship program than do non-participants,” he wrote. “… These differences are large in magnitude and are statistically significant, and indicate that scholarship participants tend to be considerably more disadvantaged and lower-performing upon entering the program than their non-participating counterparts.”
Second, the scholarship students are poorer. Eligibility to enter the program is the same as eligibility for free or reduced-price lunch, which is 185 percent of poverty. But household income for every scholarship student is verified every year, while income is checked for only 3 percent of public school students. So we know the average scholarship student household income in 2009-10 was only 118 percent of poverty. By comparison, we know that in a recent review of school lunch audits, 62 percent of the public school students refused or failed the income audit.
Third, the racial makeup is different as well. The new scholarship students in 2009-10 were more likely to be black (48 vs. 33 percent) and less likely to be white (21 vs. 27 percent).
So, Figlio turns to “regression discontinuity” and is prone to saying things like this: “the results must be interpreted with considerable caution.” Among his more concrete assertions, he writes that: “The estimated effects of program participation on math performance are statistically significantly positive at conventional levels… and the estimated effects on reading performance are significantly positive in the case of reading.” He adds: “These differences, while not large in magnitude, are larger and more statistically significant than in the past year’s results, suggesting that successive cohorts of participating students may be gaining ground over time.”
In sum, the report is neither definitive nor an endorsement. But it does add genuine value to our understanding of how these underprivileged children are faring. It tells us that struggling students are choosing the scholarship and that in the most recent year they were marginally outperforming their public school counterparts.
Given the size and reach of Florida’s program, any evaluation tends to receive close scrutiny. So we should add some other points of context as well. We know from a state-commissioned survey in 2009 that fully 95.4 percent of scholarship parents rate their school as “good” or “excellent.” We know from a 2010 academic study on competitive effects that public schools most impacted by the scholarship program are actually performing better as well. And we know from a 2008 state watchdog agency report that the scholarship saves $38.9 million a year that can be used to improve traditional public schools. These are all pieces of the policy puzzle, as we continue to examine whether this option for the poorest of our schoolchildren strengthens public education.