Give Sean Cavanagh at Education Week credit for a relatively balanced report on the tax credit scholarship issues that have been raised in several states, though not all of his sources displayed similar rigor. The story offers a snapshot of how one noted academic researcher, Kevin Welner of the University of Colorado, draws financial conclusions, and the picture is not pretty.
In 2008, in fact, Welner raised a legitimate question about a state agency report that concluded the Florida Tax Credit Scholarship saved taxpayers $38.9 million in 2007-08. He criticized the agency, the Office of Program Policy Analysis and Government Accountability (OPPAGA), for using what amounted to an educated guess that 90 percent of low-income students who chose the scholarship would have otherwise attended public schools. This is one of many factors critical to the evaluation, because students who would have attended a private school anyway save the state no money. Welner focused on this 90 percent figure and went so far as to describe it as a form of “smoke and mirrors.”
Fast forward four years. Four independent organizations, including the highly regarded OPPAGA and the nonpartisan Florida Consensus Revenue Estimating Conference, have now issued seven reports that all conclude the program saves money. In both the second OPPAGA report, issued in 2010, and an estimating conference projection issued in March (page 36), fiscal analysts turned to the U.S. Census for answers. OPPAGA analyzed the Public Use Microdata Sample from the 2000 U.S. Census, taken the year before the scholarships were enacted, and determined that the 90 percent estimate was actually too low: 94.6 percent of students from families in the relevant income category attended public schools that year. The estimating conference went even further, combining American Community Survey data from 2005-09 with private school enrollment data to project the number of low-income students enrolled in each grade level in private schools in 2012.
The results: The second OPPAGA report found savings of $34.6 million in 2008-09, and the estimating conference projected savings of $57.9 million for 2012-13.
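To make the arithmetic behind these reports concrete, here is a minimal sketch of the calculation. The structure (switchers save the state per-pupil funding, while every scholarship costs forgone tax revenue) follows the reports described above; the enrollment count and dollar amounts are illustrative placeholders, not figures from OPPAGA or the estimating conference.

```python
# A minimal sketch of the savings arithmetic, with illustrative inputs.
# Only the switcher-rate logic reflects the reports discussed above;
# the enrollment and dollar figures are placeholders.

def estimated_savings(num_scholarships, switcher_rate,
                      per_pupil_funding, avg_scholarship):
    """Net taxpayer impact: public school costs avoided for switchers
    minus the tax revenue forgone to fund every scholarship."""
    avoided_cost = num_scholarships * switcher_rate * per_pupil_funding
    forgone_revenue = num_scholarships * avg_scholarship
    return avoided_cost - forgone_revenue

# The bottom line moves with the switcher rate, which is why the 90
# percent guess drew fire and the Census-based 94.6 percent matters.
print(estimated_savings(30_000, 0.900, 7_000, 4_000))  # 69,000,000
print(estimated_savings(30_000, 0.946, 7_000, 4_000))  # ~78,660,000
```

As the sketch shows, the estimate rises or falls with the switcher rate, which is exactly why the Census-based revision from 90 to 94.6 percent mattered.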
Asked to respond to this new analytical consensus in Florida, though, Dr. Welner largely reprised his 2008 remarks.
First, he questioned the assumption that public school expenses would “automatically fall” as students leave to attend other options, though neither OPPAGA nor the estimating conference assumes as much; both calibrate their savings estimates rather than assume dollar-for-dollar reductions. Second, and more worrisome, he again challenged whether the researchers had valid information about the percentage of students who would have attended private schools even without the scholarship. To use his own words: “We simply have no idea how many switchers, or non-switchers, there are.”
No idea?
In fact, we now have census data, and that is not guesswork. It is a valuable tool that informs public policy in myriad ways. The analysts at OPPAGA and the estimating conference are state professionals whose jobs hinge on their ability to make accurate revenue and expense forecasts, and his criticism of their work carried with it a whiff of intellectual disdain. In this case, though, he was the one serving up guesswork, and he was the one dismissing out of hand what most social scientists would view as meaningful data.
It seems fair, then, to recite the second sentence of the mission statement of the National Education Policy Center, which Dr. Welner directs. “We are guided,” it says, “by the belief that the democratic governance of public education is strengthened when policies are based on sound evidence.” He might want to read more carefully before he guesses again.
Hello Mr. East,
Allow me to make a few quick points.
While I appreciate the addition of broad population survey and census data, I very much still conclude that estimates like the ones you quote are — in a very real sense — based on guesswork. My argument all along has been that the overwhelming number and importance of unknowns (certainly but not exclusively about switchers versus non-switchers) makes the costing-out process unreliable. At best, a range — a confidence interval — should be presented, with all the estimates/guesses set forth explicitly. In the “NeoVouchers” book I even provide an algebraic formula as a way of presenting the interrelationship between key variables that would determine overall taxpayer savings or costs.
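To sketch the idea in simplified form (hypothetical inputs; this illustrates the interrelationship of the key variables, not the actual formula from the book):

```python
# A simplified illustration of reporting savings as a range over an
# uncertain switcher share, rather than as a single point estimate.
# Hypothetical inputs; this is not the formula from "NeoVouchers".

def savings_range(num_scholarships, per_pupil_funding, avg_scholarship,
                  switcher_low, switcher_high):
    """Return (low, high) bounds on net taxpayer savings."""
    def net(rate):
        return num_scholarships * (rate * per_pupil_funding - avg_scholarship)
    return net(switcher_low), net(switcher_high)

# With wide uncertainty about switchers, the bottom line can swing
# from a cost to a savings:
low, high = savings_range(30_000, 7_000, 4_000, 0.50, 0.95)
print(low, high)  # -15,000,000 (a cost) and 79,500,000 (a savings)
```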
Having said that, I should note that my comment about fixed costs was focused on a cost-estimate document from North Carolina, not from Florida. The NC document did, in fact, base its finding of cost savings on local savings, which outweighed the losses at the state level.
As you know, each state’s neovoucher law is different — in pretty substantial ways. The chances of neovouchers saving taxpayers money are much greater in Florida than in Georgia or Arizona, for instance. I discuss the importance of these various provisions in the NeoVouchers book. Yet, while we can probably safely say, for example, that the policy will reach more low-income switchers if it is means-tested, those are relative comparisons based on logic.
Here’s another thought. In Georgia, as cited in the New York Times article about neovouchers from a month or two ago, the Southern Education Foundation used a comparison of private school enrollment before and after the neovoucher law to conclude that the recipients were overwhelmingly non-switchers. That is, since private school enrollment increased only incrementally at a time when the neovouchers were being scaled up, the SEF argued for the reasonable conclusion that the money was going to students who would have enrolled anyway (and that the program is therefore costing state taxpayers money).
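A simplified version of the SEF inference, with hypothetical numbers rather than the actual Georgia figures:

```python
# Sketch of the SEF-style inference: compare the growth in private
# school enrollment to the neovoucher scale-up over the same period.
# All numbers here are hypothetical, not the actual Georgia data.

enrollment_before = 120_000        # private enrollment before the law
enrollment_after = 123_000         # private enrollment after scale-up
neovouchers_awarded = 10_000

# At most, the net enrollment growth can be attributed to switchers;
# the remaining recipients would plausibly have enrolled anyway.
switchers_at_most = enrollment_after - enrollment_before          # 3,000
non_switchers_at_least = neovouchers_awarded - switchers_at_most  # 7,000
print(non_switchers_at_least / neovouchers_awarded)  # 0.7
```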
When I’ve spoken with reporters about fiscal estimates, I’ve pointed out weaknesses in the approach you’ve praised (broad population survey and census data) and the approach that SEF used. I’ve tried to caution against reliance on either approach. Even used together, these approaches would at best yield an estimate with a large confidence interval.
None of this is an argument against neovouchers — it’s an argument against selling (or attacking) neovouchers based on very weak data and therefore very weak calculations and bottom lines.
Dr. Welner,
Thank you for your comments. I definitely agree that we should use as many data points as possible, though I would surely rate Census data well above the level of guesswork. I also agree that the policy framework for tax credit scholarships varies greatly from state to state. Computing the costs in Georgia, for example, would be greatly complicated by the fact that the law sets no income guidelines for recipients and that the maximum allowed scholarship is more than double that in other states — factors that would work against any savings.
My pitch, then, was simply about Florida. This state limits the scholarship to students whose household income is no more than 185 percent of the poverty level, and the average participating household this year is only 12 percent above poverty. The scholarship amount this year is only 64 percent of the operational portion of public school funding, or less than half the total. The program has been evaluated seven times by four different independent groups, with varying approaches and different data sets, and all have reached the conclusion that money is saved. So I was in fact making the case that the Florida financial data is conclusive, and I’m hoping we can at least agree that it can be distinguished from others that you feel involve “very weak calculations.”
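To put the break-even arithmetic plainly (a quick sketch; the operational-funding amount is a placeholder, with only the 64 percent ratio taken from the paragraph above):

```python
# Break-even logic implied by the Florida figures: if the scholarship
# costs 64 percent of operational per-pupil funding, the program saves
# money whenever more than 64 percent of recipients are switchers.
# The funding amount is a placeholder; only the ratio is from the post.

operational_funding = 7_000                 # illustrative per-pupil amount
scholarship = 0.64 * operational_funding    # 64 percent, as cited above

break_even_switcher_rate = scholarship / operational_funding
print(break_even_switcher_rate)  # 0.64, far below the 94.6% estimate
```

Every independent estimate puts Florida’s switcher share well above that threshold, which is one reason the seven reports agree.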
Weak data is definitely no way to sell a program at either a state or national level. But I worry that you seem unwilling to go further and draw distinctions in that regard between individual states and individual policy frameworks. We no doubt differ on whether more scholarship programs should exist, but we would both agree that any new state considering a tax credit scholarship should mimic the policies that are best supported by real data. We would both want them to be viable, to be harmonious with traditional public school infrastructure, and to help poor children.
Thanks again for your comments.
Jon